
Figure 8: Centroid and Successive comparison reduction
5 FINE ADJUSTMENT
The detection method proposed here tends to identify the same face several times, in adjacent squares, as shown in figure 8.
However, this is not a disadvantage, since the multiple detections can be used to validate and refine the face position through a process called Centroid Reduction.
Let rectangles A and B be called “centroid neighbors” if the centroid of A lies inside rectangle B and vice-versa. This definition extends naturally to n rectangles. Figure 8 shows many centroid neighbors, drawn in yellow.
The centroid reduction method replaces each group of “centroid neighbors” by their average square. Squares that have no neighbors are discarded, because they are more likely to be a mistake made by the ANN. Figure 8 shows an example of centroid reduction: all yellow squares are replaced by a single square computed as their average. The red square in figure 8 is ignored during the process because it has no “centroid neighbors”.
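As an illustration only, the following Python sketch implements one reasonable reading of centroid reduction: detections whose centroids fall mutually inside each other are grouped and replaced by their average rectangle, while isolated detections are discarded. The (x, y, w, h) rectangle representation and the grouping strategy are assumptions, not the paper's exact implementation.

```python
import numpy as np

def centroid(rect):
    # rect = (x, y, w, h); returns the centre point of the rectangle
    x, y, w, h = rect
    return (x + w / 2.0, y + h / 2.0)

def contains(rect, point):
    x, y, w, h = rect
    px, py = point
    return x <= px <= x + w and y <= py <= y + h

def centroid_neighbors(a, b):
    # A and B are "centroid neighbors" when each contains the other's centroid
    return contains(a, centroid(b)) and contains(b, centroid(a))

def centroid_reduction(rects):
    # Group mutually neighboring detections and replace each group by its
    # average rectangle; isolated detections are discarded as likely errors.
    groups, used = [], [False] * len(rects)
    for i, r in enumerate(rects):
        if used[i]:
            continue
        group, used[i] = [r], True
        for j in range(i + 1, len(rects)):
            if not used[j] and any(centroid_neighbors(g, rects[j]) for g in group):
                group.append(rects[j])
                used[j] = True
        if len(group) > 1:
            groups.append(tuple(np.mean(group, axis=0)))
    return groups
```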
After centroid reduction, another fine adjustment, called successive comparison reduction, is applied. Starting from the face position output by centroid reduction, new candidate faces are generated by shifting the original rectangle one pixel to the left, right, top and bottom. Further candidates are created from the original one by reducing its width and height by one pixel. All of these candidates, along with the original one, are submitted again to the neural network in a greedy recursive search, in which the candidate with the highest network output is kept as the current face. This process repeats until the rectangle position does not change between two iterations. Figure 8 also shows the successive comparison adjustment method: the resulting rectangle is smaller and better fitted to the face than the input square.
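A minimal sketch of this greedy refinement is given below, assuming a function net_output(image, rect) that returns the network's confidence for the face cropped by rect; that name and the (x, y, w, h) convention are illustrative, not the paper's code.

```python
def successive_comparison(rect, image, net_output):
    # Greedy search: at each step keep the candidate with the highest ANN
    # output, stopping when the rectangle no longer changes.
    x, y, w, h = rect
    while True:
        candidates = [
            (x, y, w, h),        # current rectangle
            (x - 1, y, w, h),    # shifted one pixel left
            (x + 1, y, w, h),    # shifted one pixel right
            (x, y - 1, w, h),    # shifted one pixel up
            (x, y + 1, w, h),    # shifted one pixel down
            (x, y, w - 1, h),    # one pixel narrower
            (x, y, w, h - 1),    # one pixel shorter
        ]
        best = max(candidates, key=lambda r: net_output(image, r))
        if best == (x, y, w, h):
            return best
        x, y, w, h = best
```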
6 FACE ELEMENTS LOCALIZATION
Once the face is located, simple techniques can be used to find each face element of the subject. This is possible because, once the smallest rectangle containing the face is known, the face elements can be assumed to follow a specific geometry that helps in finding each sub-part of the face, such as the eyes, mouth and nose.
6.1 Eye pattern localization
Given the smallest rectangle that covers the face, it can be guaranteed that the eyes lie in the upper half of this rectangle. It can also be assumed that, if this upper part is divided again into two halves, one is very likely to contain the left eye and the other the right one.
Thus, to obtain the position of both eyes, the 2D cross-correlation between each half (left and right) and a standard eye pattern is computed. Equation 4 shows the cross-correlation function, where M and N are the dimensions of the picture. Figure 9 shows the eye pattern.
$$ f(x,y) \ast g(x,y) = \frac{1}{MN} \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f(m,n)\, g(x+m,\, y+n) \qquad (4) $$
Figure 9: Eye pattern
Eye masks are then generated by admitting an 80% tolerance of the maximum correlation value. Once both masks are computed, the final eye positions are obtained. Figure 10 represents the process, and figure 11 shows the computed positions.
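The sketch below illustrates this step for one half of the upper face region, using scipy's correlate2d for the cross-correlation of equation 4 and keeping mask points within 80% of the maximum correlation. Reducing the mask to a single point through its centroid is an assumption, since the paper does not detail that final step.

```python
import numpy as np
from scipy.signal import correlate2d

def locate_eye(half_face, eye_pattern):
    # 2-D cross-correlation between one half of the upper face region and the
    # eye pattern (equation 4); the mask keeps points within 80% of the peak.
    corr = correlate2d(half_face.astype(float), eye_pattern.astype(float),
                       mode='same')
    mask = corr >= 0.8 * corr.max()
    ys, xs = np.nonzero(mask)
    # Assumed final step: take the centroid of the mask as the eye position.
    return xs.mean(), ys.mean()
```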
6.2 Mouth pattern localization
The mouth can be identified more easily than the eyes. First, the original face rectangle is divided horizontally. Then, the mouth's edges are computed using a Laplacian method (Gonzalez, 1992), shown in equation 5:
$$ \nabla^2 f(x,y) = \frac{\partial^2 f(x,y)}{\partial x^2} + \frac{\partial^2 f(x,y)}{\partial y^2} \qquad (5) $$
Edges on the lateral sides of the image are ignored, and a fill algorithm is applied. This morphological algorithm changes black points into white when all pixels in the D8 neighborhood are white (Gonzalez, 1992). The process repeats until no change occurs in the image.
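The following sketch approximates this stage, computing the Laplacian edge map of equation 5 with scipy's laplace operator and then applying the D8 fill rule until the binary image stabilizes; the edge threshold value is illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import laplace

def mouth_edges(lower_face, threshold=30):
    # Laplacian edge map (equation 5); points above the threshold are edges.
    return np.abs(laplace(lower_face.astype(float))) > threshold

def d8_fill(binary):
    # Turn a black (False) pixel white (True) when all 8 neighbours are white,
    # repeating until the image no longer changes.
    img = binary.copy()
    while True:
        padded = np.pad(img, 1, constant_values=False)
        neighbours_all_white = np.ones_like(img, dtype=bool)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                shifted = padded[1 + dy : 1 + dy + img.shape[0],
                                 1 + dx : 1 + dx + img.shape[1]]
                neighbours_all_white &= shifted
        updated = img | (~img & neighbours_all_white)
        if np.array_equal(updated, img):
            return updated
        img = updated
```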