Therefore, the proposed system of algorithms
consists of the following three tasks.
First, we estimate the L2 measure of
probabilistic dependence with an orthogonal basis in
order to obtain different multivariate extractors
resulting from a number of initializations of the
numerical optimization procedure.
Second, an estimate of the misclassification
error is computed for each solution using the modified
kernel estimate of the conditional probability density
functions with the optimal smoothing
parameter. This parameter is obtained by
minimizing the Mean Integrated Squared Error
(MISE); the plug-in algorithm attempts to provide this
solution.
Third, the subspace that yields the
minimum misclassification value is
chosen.
In each task, the numerical optimization procedure
does not necessarily reach an optimal solution, so we
cannot guarantee that a Bayes classifier is obtained
each time.
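As an illustration of the second task, the sketch below estimates the misclassification error of a plug-in Bayes rule built from Gaussian kernel estimates of the class-conditional densities. Silverman's rule of thumb stands in here for the full MISE-minimizing plug-in bandwidth selector, and all function names are illustrative assumptions, not the original system's implementation:

```python
import numpy as np

def gaussian_kde(x, samples, h):
    """1-D Gaussian kernel density estimate at the points x."""
    u = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

def silverman_bandwidth(samples):
    """Rule-of-thumb bandwidth (a simple stand-in for a plug-in MISE minimizer)."""
    n = len(samples)
    return 1.06 * samples.std(ddof=1) * n ** (-1 / 5)

def misclassification_rate(train_by_class, test_x, test_y):
    """Error of the plug-in Bayes rule built from class-conditional KDEs."""
    classes = sorted(train_by_class)
    priors = np.array([len(train_by_class[c]) for c in classes], float)
    priors /= priors.sum()
    # posterior-proportional scores: prior times class-conditional density
    scores = np.stack([
        priors[i] * gaussian_kde(test_x, train_by_class[c],
                                 silverman_bandwidth(train_by_class[c]))
        for i, c in enumerate(classes)
    ])
    pred = np.array(classes)[scores.argmax(axis=0)]
    return np.mean(pred != test_y)
```

On two well-separated Gaussian classes this estimate approaches the Bayes error; in the proposed system it is evaluated once per optimizer initialization to rank the candidate extractors.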
4 FACE CLASSIFICATION PROCESS
In face recognition, a large number of classifiers
have been introduced in the literature, with success
rates varying according to the application type. Since
feature extraction is a compulsory phase of a pattern
recognition system, this approach offers a
convenient starting point for the classification of the
feature vectors, which has led researchers in the
face recognition domain to introduce a large number
of facial feature extraction methods.
In this study, we have employed the “BioID”
dataset (Jesorsky et al., 2001), composed of 1521
grayscale images of 23 faces in frontal view; for
each face image of this database, 20 feature points
are annotated. For any algorithm used, facial
recognition is accomplished in a four-step process:
acquisition, face detection, feature extraction
and finally classification. In this section we
describe the details of all the steps used in our
work to accomplish the face classification process.
As a first step, we selected AdaBoost to
detect the face and to locate the characteristic
features, since most of the existing methods for
facial feature extraction assume that at least a coarse
location of the face has been detected. After this
operation, the computational complexity of the facial
feature extraction can be significantly reduced.
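The detector above relies on boosted classifiers. As a hedged illustration of the boosting principle only (not the actual Viola-Jones cascade with Haar features used for face detection), here is a minimal AdaBoost over one-feature threshold stumps; all names are illustrative:

```python
import numpy as np

def adaboost_stumps(X, y, rounds=20):
    """Minimal AdaBoost with single-feature threshold stumps; labels y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)               # sample weights, updated each round
    model = []                            # list of (feature, threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        # exhaustively pick the stump with lowest weighted error
        for j in range(d):
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] >= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, pol)
        err, j, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = pol * np.where(X[:, j] >= t, 1, -1)
        w *= np.exp(-alpha * y * pred)    # up-weight misclassified samples
        w /= w.sum()
        model.append((j, t, pol, alpha))
    return model

def adaboost_predict(model, X):
    """Weighted vote of the learned stumps."""
    score = sum(a * p * np.where(X[:, j] >= t, 1, -1) for j, t, p, a in model)
    return np.where(score >= 0, 1, -1)
```

A practical face detector composes many such boosted stages into a cascade over Haar-like features, rejecting most non-face windows early.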
In the second step, we move on to face
normalization, which is a very important stage for
recognition algorithms. We start with a
geometric normalization, which consists of
rotating the face to align the eye axis with the
horizontal axis; we then recover a face image
in which the distance between the eye centers is fixed.
The dimensions of the face image are derived from
the distance between the obtained eye centers. In
this phase we also set the position of the mouth center
in the normalized image in order to obtain an acceptable
column normalization and to ensure that the
different face parts (eyes, mouth and nose) are in the
same position for all faces. We next apply a
dynamic-range expansion to the normalized image,
based on a decrease in the center of the
image histogram, to obtain images with the same
range of gray-level distribution and an average
alignment of these levels. Second, we apply an
illumination normalization: histogram
equalization re-calibrates the grayscale image,
leading to better contrast, and a gamma correction
reduces the gap between light and dark areas of the
face through a nonlinear transformation of the gray
levels (Fig. 1).
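The normalization steps above can be sketched as follows; the helper names and the gamma value are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def eye_alignment_angle(left_eye, right_eye):
    """Rotation angle (radians) that brings the eye axis to horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return np.arctan2(dy, dx)

def gamma_correct(img, gamma=0.6):
    """Nonlinear gray-level transform; gamma < 1 lifts dark areas of the face."""
    x = img.astype(float) / 255.0
    return (255 * x ** gamma).astype(np.uint8)

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize CDF to [0, 1]
    lut = (255 * cdf).astype(np.uint8)                  # gray-level lookup table
    return lut[img]
```

In practice the angle would feed an affine warp that rotates and rescales the face so the inter-eye distance is fixed, after which the photometric corrections are applied.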
Once the facial regions have been retrieved, the
analysis focuses on the facial features. The
adopted method locates features coarsely by
searching for areas of low intensity among possible
face regions. This approach involves basic computer
vision operations such as morphology and projection
analysis. Morphological operators are well suited to
this task because of their easy and fast implementation
and their robustness. Projections can also be
computed easily and are convenient for real-time
applications. A drawback of projection
methods is that the gray-scale information is strongly
influenced by variations in illumination conditions
and by noise. As a result, projection curves are not
smooth, which makes them difficult to analyze
automatically. We therefore use the
geometrical face model hand in hand with the
projections and morphology to avoid such problems.
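As a minimal sketch of the morphological side, a grayscale erosion (minimum filter) enlarges dark regions such as the eyes and mouth, making them easier to localize; the function below is an illustrative NumPy implementation, not the paper's specific operator:

```python
import numpy as np

def grey_erode(img, k=3):
    """Grayscale erosion with a k-by-k square structuring element (min filter).

    Dark facial features (eyes, mouth) grow under erosion, which makes
    them stand out against brighter skin regions.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out
```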
First, we apply the projection analysis: the gray-
level intensity of the facial features is much lower
in the image than that of their close neighbors, so the
positions of facial features can be found by
projections of the image. The significant minima are
extracted from the horizontal projection. For
these minima the vertical projection is computed
and significant minima are searched for again. The
results obtained are treated as feature candidates.
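The projection analysis described above can be sketched as follows, with illustrative helper names; "significant minima" are modeled here simply as local minima of the projection curve falling below a threshold:

```python
import numpy as np

def horizontal_projection(img):
    """Mean gray level of each image row; feature rows appear as minima."""
    return img.mean(axis=1)

def local_minima(curve, threshold):
    """Indices of local minima of a projection curve that fall below threshold."""
    c = np.asarray(curve, float)
    return [i for i in range(1, len(c) - 1)
            if c[i] < c[i - 1] and c[i] < c[i + 1] and c[i] < threshold]
```

For each significant row minimum (e.g., the eye line), the same search would then be run on the vertical projection of that band, yielding the feature candidates.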
Next, once we have applied the horizontal projection
to the facial images and obtained the base lines, a
morphological operator is used to find the eye
positions. Since the position of the eyes and the
intraocular distance are almost similar for most of
ICPRAM 2013 - International Conference on Pattern Recognition Applications and Methods