2.2 Direct Appearance Model
This method (X. Hou and Cheng, 2001) is derived from the classical AAM by eliminating the joint PCA on texture (Eq. 2) and shape. It uses the texture information directly to predict the shape, i.e. to estimate position and appearance, starting from the observation that the shape can be extracted directly from the texture. The shape and texture models are built by PCA. The main difference between the DAM and the AAM lies in a third PCA: the texture differences generated by small displacements on each image of the training database are collected, and a PCA is carried out on these differences to obtain a projection matrix $H_T$. The texture difference is then projected onto this subspace:
$$\delta g' = H_T \, \delta g \qquad (7)$$
The dimension of $\delta g'$ is about a quarter of the dimension of $\delta g$, which makes the prediction more stable. The regression in the DAM therefore requires less memory than the regression used in the AAM. The search procedure is the same as in the classical AAM, except for the prediction of the new shape and texture. In [5] it is shown that the regression matrix is 11.83 times smaller than that of the AAM.
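As an illustration of this third PCA, the following is a minimal Python/NumPy sketch, under the assumption that the texture differences produced by the small displacements are already shape-normalized and stacked as rows of an array; the function and variable names are illustrative, not those of [5].

```python
import numpy as np

def build_projection(delta_textures, keep_ratio=0.25):
    # delta_textures: (N, P) array, one texture difference per perturbation.
    # PCA on the texture differences; keep roughly a quarter of the
    # dimensions, as reported for the DAM.
    centered = delta_textures - delta_textures.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)  # rows of Vt = principal axes
    k = min(max(1, int(keep_ratio * delta_textures.shape[1])), Vt.shape[0])
    return Vt[:k]                                            # plays the role of H_T

def project(H_T, delta_g):
    # Eq. (7): delta_g' = H_T * delta_g
    return H_T @ delta_g

# Toy usage with random data standing in for real texture differences.
deltas = np.random.randn(200, 400)
H_T = build_projection(deltas)
reduced = project(H_T, deltas[0])   # reduced dimension ~ 400 / 4
```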
2.3 Active Wavelet Networks
This method (Hu et al., 2003) uses wavelets as an alternative to PCA in order to reduce the dimension of the space. It uses a Gabor Wavelet Network (GWN) (Hu et al., 2003) to model the texture variations of the training base. The GWN approach represents an image as a linear combination of 2D Gabor functions, whose weights are chosen so as to preserve the maximum information contained in the image for a fixed number of wavelets. The search method for faces (or any other object) is the same as that of the classical AAM: the initial positions are perturbed and a linear relation (regression matrices) is established between the displacement of the parameters and the pixel error.
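The sketch below illustrates the representation idea only: a few fixed 2D Gabor functions and least-squares weights that best reproduce a small image. The actual GWN construction in (Hu et al., 2003) also optimizes the wavelet parameters themselves; everything here (names, parameter values) is illustrative.

```python
import numpy as np

def gabor_2d(shape, x0, y0, theta, freq, sigma):
    # One 2D Gabor function (real part), evaluated on an image grid.
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    xr = (xs - x0) * np.cos(theta) + (ys - y0) * np.sin(theta)
    yr = -(xs - x0) * np.sin(theta) + (ys - y0) * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * xr)

def fit_weights(image, wavelets):
    # Least-squares weights so the linear combination of the fixed
    # wavelets reproduces the image as closely as possible.
    B = np.stack([w.ravel() for w in wavelets], axis=1)   # basis matrix
    w, *_ = np.linalg.lstsq(B, image.ravel(), rcond=None)
    return w

# Toy usage: approximate a 32x32 image with 3 fixed Gabor wavelets.
img = np.random.rand(32, 32)
bank = [gabor_2d(img.shape, 16, 16, t, 0.1, 5.0) for t in (0.0, 0.8, 1.6)]
weights = fit_weights(img, bank)
approx = sum(w * g for w, g in zip(weights, bank))
```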
The DAM and AWN methods make it possible to reduce the memory required to store the RM. In the following section we propose a method that removes the space allocated to these matrices altogether.
3 AAM OPTIMIZATION
We propose to keep the training part (Section 2.1.1) and to optimize the search (Section 2.1.1) over the appearance (Eq. 2) and pose (Eq. 3) parameters using the Nelder-Mead Simplex (SP) [2]. It is a numerical optimization method that allows us to find the solutions minimizing the pixel error. This method converges as a population (a set of solutions converging toward the same minimum), which makes the solution more stable; it is direct, requiring no derivative computation; and it converges in a number of iterations that is small compared with global optimization methods that need many iterations, such as Genetic Algorithms or Simulated Annealing. It also reduces the memory required by the classical AAM optimization: only the mean model is preserved, so we no longer have to store the RM.
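As a hedged sketch of how the parameters could be optimized in this way, the example below runs a Nelder-Mead search (here via SciPy) on a stand-in objective that plays the role of the pixel error; in the real setting the objective would warp the image with the pose parameters, sample the texture, and compare it with the texture synthesized from the appearance parameters.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for the pixel error E(p) = ||g_image(p) - g_model(p)||^2.
# A simple quadratic is used so the sketch runs on its own; 'target' is a
# purely illustrative "best" vector of appearance + pose parameters.
target = np.array([0.3, -0.2, 0.1, 0.05])

def pixel_error(p):
    return float(np.sum((p - target) ** 2))

p0 = np.zeros_like(target)            # start from the mean model (all parameters at 0)
res = minimize(pixel_error, p0, method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-8, "maxiter": 1000})
print(res.x, res.fun, res.nit)        # solution, final error, iterations used
```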
3.1 Nelder Mead Simplex Algorithm
The Nelder-Mead simplex (Nelder and Mead, 1965) finds the minimum of a function of several variables in an iterative way. For two variables the simplex is a triangle, and the method consists of comparing the values of the function at each vertex of the triangle: the vertex where the function is highest is rejected and replaced by a new vertex computed from the remaining ones. The algorithm is called simplex by generalization of the triangle to n dimensions. The stopping criterion of the algorithm is a threshold on the difference between the values of the objective function at the current solutions; this threshold determines the number of iterations needed to converge. The error to be minimized is the pixel error (Eq. 11) used by the classical AAM.
For the new solutions, all search operators rely on a centroid $x_c = \frac{1}{n}\sum_{i=1}^{n} x^k_i$ computed from the current solutions at each iteration, which gives a direction $d^k = x_c - x^k_{n+1}$ toward the solutions minimizing the error function. Let E denote the objective function to be minimized. The operators used to search for solutions minimizing this function are as follows:
• The Reflection: we test the point that lies in the opposite direction of the worst solution:
$$x_r = x^k_{n+1} + 2d^k = 2x_c - x^k_{n+1}. \qquad (8)$$
• The Expansion: we extend the search beyond the reflection point by testing the solution:
$$x_e = x^k_{n+1} + 3d^k = 2x_r - x_c. \qquad (9)$$
• The Contraction: if the two previous search operators fail, we test points on either side of the current solution:
$$x^- = x^k_{n+1} + \tfrac{1}{2}d^k = \tfrac{1}{2}(x^k_{n+1} + x_c), \qquad x^+ = x^k_{n+1} + \tfrac{3}{2}d^k = \tfrac{1}{2}(x_r + x_c). \qquad (10)$$
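These three operators can be assembled into a single update of the worst vertex. The sketch below is a simplified illustration built directly from Eqs. (8)-(10) (fixed coefficients, no shrink step), not the complete Nelder-Mead bookkeeping; the stopping test follows the threshold criterion described above, and the toy objective stands in for the pixel error.

```python
import numpy as np

def simplex_step(X, E):
    # X: (n+1, n) array of current solutions, E: objective (pixel error).
    values = np.array([E(x) for x in X])
    order = np.argsort(values)
    X, values = X[order], values[order]   # X[-1] is now the worst vertex
    x_c = X[:-1].mean(axis=0)             # centroid of the n best vertices
    d = x_c - X[-1]                       # direction d^k = x_c - x^k_{n+1}

    x_r = X[-1] + 2.0 * d                 # reflection, Eq. (8): 2*x_c - x^k_{n+1}
    if E(x_r) < values[-2]:               # reflection beats the second-worst vertex
        x_e = X[-1] + 3.0 * d             # expansion, Eq. (9): 2*x_r - x_c
        X[-1] = x_e if E(x_e) < E(x_r) else x_r
    else:                                 # contraction, Eq. (10): test both sides
        x_minus = X[-1] + 0.5 * d         # (x^k_{n+1} + x_c) / 2
        x_plus = X[-1] + 1.5 * d          # (x_r + x_c) / 2
        X[-1] = min((x_minus, x_plus), key=E)
    return X

# Illustrative usage on a toy quadratic standing in for the pixel error.
E = lambda x: float(np.sum(x ** 2))
X = np.random.randn(5, 4)                  # n + 1 = 5 vertices in 4 dimensions
for _ in range(10_000):
    if np.ptp([E(x) for x in X]) < 1e-8:   # stopping threshold on the error spread
        break
    X = simplex_step(X, E)
```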