Previous work on the SP (Khiari, 2011) showed that no single orientation is best for all tests; each one makes its own contribution. For this reason, a strategy of fusing all sub-bands was adopted, so as to take advantage of the complementarity between the different orientations.
Operating sub-band fusion: Adopting fusion required fixing several parameters, among them the choice of the score-fusion rule. Usual operators such as the maximum, product and sum of scores were tried. The best results were achieved with the sum rule, with an improvement reaching 4.4% compared to the best sub-band used separately. Another question concerned the number of features to keep in every sub-band; two options were tested:
Same feature number for all sub-bands: Referring to the results of Test 3.b in Table 2, this kind of fusion outperforms applying Adaboost on the whole SP. Moreover, it is much less time-consuming, making it possible to increase the number of considered features.
Weighting the feature number per sub-band: While running Adaboost on the entirety of the SP, it was noticed that the number of selected features was not the same for all sub-bands. Based on this observation, the idea of weighting the number of features for every sub-band (oriented at all scales, high-pass and low-pass) was suggested. Once the total number of features is fixed, the weights are attributed according to the feature distribution found in Test 3.a, namely 2%, 12.75%, 13.5%, 10.25%, 18%, 15.25%, 18.5% and 9.75% respectively for the high-pass, oriented band-pass 1 to 6, and low-pass sub-bands. For instance, assuming a total of 800 features, Adaboost selects respectively 16, 102, 108, 82, 144, 122, 148 and 78 features from the above sub-bands. Most experiments improved, with an enhancement reaching 0.8% (Table 2, Test 3.c) when compared to taking the same feature number for all sub-bands.
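As a concrete check of the allocation described above, the following sketch (function and band names are illustrative, not from the paper) converts the reported per-band percentages into feature counts for a given budget:

```python
# Per-sub-band weights reported for the Test 3.a feature distribution:
# high-pass, oriented band-pass 1 to 6, low-pass (they sum to 100%).
WEIGHTS = {
    "high_pass": 0.02,
    "band_1": 0.1275, "band_2": 0.135, "band_3": 0.1025,
    "band_4": 0.18, "band_5": 0.1525, "band_6": 0.185,
    "low_pass": 0.0975,
}

def allocate_features(total: int) -> dict:
    """Split a total feature budget across sub-bands proportionally."""
    return {band: round(w * total) for band, w in WEIGHTS.items()}

counts = allocate_features(800)
# -> 16, 102, 108, 82, 144, 122, 148 and 78 features, as in the text
```

With a budget of 800 the rounded counts reproduce the figures quoted in the paper exactly, since the weights were derived from that distribution.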
Another possibility was to weight the scores of
the different classifiers before the sum fusion:
Weighting scores of classifiers with the same number of features: This is equivalent to a weighted sum rule at the score level while keeping the same number of features for all sub-bands. Denoting by E_sb the EER of sub-band sb, the weight W_sb associated to the scores of sub-band sb is computed from Equation 2 (Su, 2009):

W_sb = (1 / E_sb) / Σ_{i=1..nb_sb} (1 / E_i)    (2)

with nb_sb the total number of sub-bands. An improvement reaching 0.7% was observed (Table 2, Test 3.d) when compared to taking the same feature number for all sub-bands.
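This EER-based weighting can be sketched as follows, assuming the common inverse-EER form in which each sub-band's weight is proportional to 1/E_sb and the weights are normalized to sum to one; the EER values below are hypothetical, not measurements from the paper:

```python
def eer_weights(eers):
    """Score weights W_sb = (1/E_sb) / sum_i (1/E_i).

    A sub-band whose classifier has a lower EER (i.e. is more
    reliable) receives a proportionally larger score weight.
    """
    inv = [1.0 / e for e in eers]
    total = sum(inv)
    return [v / total for v in inv]

# Hypothetical per-sub-band EERs: 5%, 10% and 20%.
w = eer_weights([0.05, 0.10, 0.20])
# The 5%-EER band gets the largest weight; the weights sum to 1.
```

The normalization keeps the fused score on the same scale as the individual scores, which is what makes this a weighted variant of the plain sum rule.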
Weighting scores of classifiers with weighted feature numbers: A final trial combined weighting at the characterization level (feature numbers) with weighting at the score level (weighted sum rule). This fusion strategy gave the best results in nearly all of the four experiments (Table 2, Test 3.e).
To summarize, the optimal configuration filters the entire 64x64 image with a 6-orientation, 3-scale Steerable Pyramid, then applies Adaboost on each sub-band (oriented at all scales, high-pass and low-pass) with weighted feature numbers. Score fusion is finally performed over the classifiers with the weighted sum rule.
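The final fusion step of this configuration can be sketched as a weighted sum over the eight per-sub-band classifier scores; all score and weight values below are illustrative placeholders, not values from the paper:

```python
def fuse_scores(scores, weights):
    """Weighted-sum score fusion: fused = sum_sb W_sb * s_sb."""
    assert len(scores) == len(weights)
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical match scores from the 8 sub-band classifiers
# (high-pass, 6 oriented band-pass, low-pass).
scores = [0.62, 0.71, 0.68, 0.55, 0.80, 0.74, 0.77, 0.59]
# Hypothetical score weights, normalized to sum to 1.
weights = [0.05, 0.15, 0.15, 0.10, 0.15, 0.15, 0.15, 0.10]

fused = fuse_scores(scores, weights)
# Because the weights sum to 1, the fused score stays within the
# range of the individual scores and can be thresholded as usual.
```

The fused score is then compared to a single decision threshold, exactly as the score of any individual classifier would be.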
Table 2 also shows the evaluation of the proposed method side by side with the other ones. It can be seen that combining the SP with Adaboost considerably improves the performance over SP and Adaboost applied separately. On the other hand, compared to PCA1 (Chaari, 2009), enhancements are obvious in all experiments.
Regarding PCA2, it has to be underlined that the training set on which its face space was built is not the one indicated by the protocol: it uses 300 images from the BANCA database (30 subjects, 10 images per subject) (Petrovska, 2009) covering 3 different image qualities. The proposed method, by contrast, strictly followed the protocol, using only 156 images of 52 individuals (3 images per person) acquired under fairly good conditions, which is not the case of the test subsets, where many variations are present. The small number of training images, together with the different acquisition conditions between training and test subsets, constitutes an additional challenge, which explains why the PCA2 results in experiment 4 are better than ours. Nevertheless, the proposed method outperforms PCA2 in the first three experiments.
Compared to LDA, except for the first controlled scenario, the proposed method achieves higher performance in the other, more challenging ones. It remains, however, less robust than LDA/Gabor, which combines a projection-based method (LDA) with a scale-space feature-extraction method (Gabor).
5 CONCLUSIONS
Through this work, a combining approach based on
BIOSIGNALS 2012 - International Conference on Bio-inspired Systems and Signal Processing