blance, capturing features at a global level does not help in discriminating between the eye regions of male and female faces. For this reason, we captured the information at a local level using a small window, which lets us capture the fine spatial information of the eye region for both genders. The window slides horizontally and vertically to cover the full image. Results for the other window sizes we tried are shown in Table 3. The third reason is the choice of classifiers. SVM, KNN and C4.5 were shown to produce good results for the gender recognition problem in the survey by (Ng et al., 2012). Given the quality of our features, it is also no surprise that we obtained excellent results with Bagging, Random Forest and AdaBoost, since they combine the outputs of a number of weak classifiers. The numbers of weak classifiers (trees) used are 800, 500 and 90 for Bagging, Random Forest and AdaBoost, respectively.
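The ensemble configuration above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the estimator counts (800 for Bagging, 500 for Random Forest, 90 for AdaBoost) match the text, but the feature matrix X and labels y are random placeholders, since the actual eye-region descriptors are not reproduced here.

```python
# Sketch of the ensemble setup described above (scikit-learn).
# X and y are placeholders; real features would come from the eye region.
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.rand(40, 30)            # placeholder features: 40 samples, 30 dims
y = rng.randint(0, 2, size=40)  # placeholder gender labels (0/1)

# Estimator counts taken from the text: 800, 500 and 90 weak learners.
classifiers = {
    "Bagging": BaggingClassifier(n_estimators=800, random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "AdaBoost": AdaBoostClassifier(n_estimators=90, random_state=0),
}

# Mean cross-validated accuracy for each ensemble.
scores = {name: cross_val_score(clf, X, y, cv=4).mean()
          for name, clf in classifiers.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

With random placeholder data the accuracies are near chance; on real eye-region features the ensembles behave as reported in Table 3.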
Table 3: Accuracy (%) of different window sizes for different classifiers.

Window size  Bagging  Random Forest  AdaBoost  KNN    SVM
6x5          100      100            100       100    100
10x10        73.75    71.25          75        63.75  73.75
10x12        73.75    73.75          78.75     60     75.25
15x20        63.75    67.5           72.5      56.25  68.5
30x60        65       66.25          62.5      65     59.25
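The local, sliding-window extraction described earlier can be sketched as follows. This is an assumption-laden illustration: the per-window statistic (here a simple patch mean) is a placeholder, since the exact descriptor is not reproduced in this excerpt; the 6x5 window corresponds to the best-performing row of Table 3.

```python
# Sketch of sliding-window feature extraction over an eye-region image.
# The patch mean is a placeholder local statistic, not the paper's descriptor.
import numpy as np

def sliding_window_features(img, win_h=6, win_w=5, step=1):
    """Slide a win_h x win_w window over img and collect one value per position."""
    feats = []
    h, w = img.shape
    for y in range(0, h - win_h + 1, step):
        for x in range(0, w - win_w + 1, step):
            patch = img[y:y + win_h, x:x + win_w]
            feats.append(patch.mean())  # placeholder local feature
    return np.array(feats)

# Toy grayscale eye-region image (30 x 60 pixels).
img = np.arange(30 * 60, dtype=float).reshape(30, 60)
f = sliding_window_features(img)
print(f.shape)  # one feature per window position
```

A smaller window yields more positions and finer spatial detail, which is consistent with the 6x5 window outperforming the larger sizes in Table 3.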
We have also compared our result with the method considered one of the state of the art, proposed in (Alexandre, 2010). As explained before, it is difficult to compare results of gender recognition algorithms, since authors use different databases to test their algorithms. However, comparisons are usually made in the literature between algorithms tested on images from the same database (not necessarily the same images). Since we selected our images from the FERET (Phillips et al., 1998) dataset, we compared our result with (Alexandre, 2010), which was also tested on FERET, as shown in Table 2. One misclassified image out of 107 is reported in (Alexandre, 2010). However, this high recognition accuracy is obtained at the cost of a very complex approach. Considering the complexity of their multiscale decision fusion approach, we believe that our method is simpler while achieving very good accuracy.
6 CONCLUSION AND FUTURE
WORK
This study identified the face region correlated with gender recognition for both female and male faces by studying the human visual system (HVS). This conclusion was reached by recording the eye movements of 15 observers as they performed a gender recognition task on 20 images chosen from the FERET database. The constructed gaze map and the statistical analysis of the collected fixations show that the eye region is the most salient for gender recognition. The localized salient region can be used by the computer vision community to gain insight into where to extract discriminative descriptors, as this is the most important step when developing robust and efficient automatic gender recognition algorithms. In addition, extracting descriptors from only the salient region results in a faster system, as it reduces the computational complexity of the algorithms.
We also proposed a novel framework for automatic gender recognition based on the localized salient region, achieving high recognition accuracy by processing only the salient region of the face.
In this paper, we considered only frontal faces when creating the stimuli for the experiment. In the future, we plan to include different face orientations in the stimuli and check whether the same conclusion holds. We also intend to occlude the eye region to find out whether secondary information can be used for gender recognition (i.e., whether gender recognition is hierarchical). Including different image resolutions in the stimuli is also among our planned future work.
ACKNOWLEDGEMENTS
This work is supported by the Erasmus Mundus Scholarship 2012-2014, sponsored by the European Union, and by CNRS.
REFERENCES
Alexandre, L. A. (2010). Gender recognition: A multiscale
decision fusion approach. Pattern Recognition Let-
ters, 31(11):1422–1427.
Andreu, Y. and Mollineda, R. A. (2008). The role of face
parts in gender recognition. In Image Analysis and
Recognition, pages 945–954. Springer.
Brown, E. and Perrett, D. (1993). What gives a face its gender? Perception, 22:829–840.
Bruce, V., Burton, A. M., Hanna, E., Healey, P., Mason, O.,
Coombes, A., Fright, R., and Linney, A. (1993). Sex
discrimination: how do we tell the difference between
male and female faces? Perception.
Buchala, S., Davey, N., Frank, R. J., Gale, T. M., Loomes,
M. J., and Kanargard, W. (2004). Gender classifica-
tion of face images: The role of global and feature-
based information. In Neural Information Processing,
pages 763–768. Springer.