tance maps preprocessing. Finally, Section 5 con-
cludes the paper with final remarks.
2 AAM: ACTIVE APPEARANCE
MODEL
The Active Appearance Model method is a deformable model method which allows shapes and textures to be jointly synthesized. AAMs, proposed by Edwards, Cootes and Taylor in 1998, are based on a priori knowledge of the shapes (points of interest connected to each other) and shape-free textures of a training database. AAMs can thus be used to generate a set of plausible representations of the shapes and textures of the learned objects. They also allow objects to be searched for in images by jointly using shape and texture information. This search is performed by an optimization process on the model parameters, in order to match the model as well as possible to the image zone containing the object. This method proceeds in three steps (briefly explained):
• A training phase in which the model and its deformation parameters are created.
A Principal Component Analysis (PCA) is applied on a shape training base and another on a shape-free texture training base, in order to create the statistical shape and texture models given by the formulas:
x_i = x_moy + Φ_x ∗ b_x (1)
g_i = g_moy + Φ_g ∗ b_g (2)
where x_i and g_i are respectively the synthesized shape and texture, x_moy and g_moy the mean shape and the mean texture, Φ_x and Φ_g the matrices of eigenvectors of the shape and texture covariance matrices, and b_x and b_g the vectors controlling the synthesized shape and texture.
Another PCA is then applied on several examples of b, the concatenation of b_x and b_g, in order to obtain the appearance parameter c:
b = Φ ∗ c (3)
with Φ the matrix of PCA eigenvectors. c is a vector controlling b_x and b_g (equation 3) at the same time, that is to say the shape (equation 1) and texture (equation 2) of the model.
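The statistical models of equations (1) and (2) can be sketched with a standard PCA. The snippet below is an illustrative implementation only (function names, the variance-retention threshold, and the toy data are our own assumptions, not the paper's); it builds the mean and eigenvector matrix from a training base and synthesizes an instance from a parameter vector b.

```python
import numpy as np

def build_pca_model(samples, var_kept=0.98):
    """Build a linear statistical model x = x_mean + Phi @ b via PCA.

    samples: (n_samples, n_dims) array of aligned shape vectors (or
    shape-free texture vectors) from the training base.
    var_kept: fraction of total variance retained (an assumed threshold;
    the paper does not specify one).
    """
    mean = samples.mean(axis=0)
    centered = samples - mean
    # SVD of the centered data yields the eigenvectors of the covariance matrix.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s**2 / (len(samples) - 1)  # eigenvalues of the covariance matrix
    n_modes = np.searchsorted(np.cumsum(var) / var.sum(), var_kept) + 1
    phi = vt[:n_modes].T             # (n_dims, n_modes) eigenvector matrix
    return mean, phi

def synthesize(mean, phi, b):
    """Equations (1)/(2): generate a plausible instance from parameters b."""
    return mean + phi @ b

# Toy example: 20 training "shapes" of 10 points (x, y coordinates interleaved).
rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 20))
x_mean, phi_x = build_pca_model(shapes)
b_x = np.zeros(phi_x.shape[1])
x_new = synthesize(x_mean, phi_x, b_x)  # b = 0 reproduces the mean shape
```

The same construction applies unchanged to the texture model of equation (2), with texture vectors in place of shape vectors.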
• An experiment matrix creation phase in which a relation between the variations of the model control parameter c and the adjustments of the model in images is established through several experiments.
Indeed, each image i of the training base contains an object synthesized from a value of the parameter c. Let us note c_0 the value of c in image i of the training base. By modifying the parameter c_0 by δc (c = c_0 + δc), we synthesize a new shape x_m and a new texture g_m (equation 3). Consider now the texture g_i of the original image i which lies inside the shape x_m. The pixel difference δg = g_i − g_m, combined with a multiple linear regression over a number of experiments (modifications of the training base images by δc), gives a relation between δc and δg:
δc = R_c ∗ δg (4)
R_c is called the experiment matrix.
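The multiple linear regression of equation (4) can be solved by ordinary least squares. The sketch below is a hypothetical implementation (the paper only states that a multiple linear regression is used, not how it is computed): it stacks the recorded perturbations δc and texture differences δg from the experiments and estimates R_c.

```python
import numpy as np

def estimate_experiment_matrix(delta_c, delta_g):
    """Estimate R_c in delta_c = R_c @ delta_g by multivariate least squares.

    delta_c: (n_experiments, n_params) perturbations applied to c
    delta_g: (n_experiments, n_pixels) resulting texture differences g_i - g_m
    """
    # Solve delta_g @ R_c.T ~= delta_c in the least-squares sense.
    rc_t, *_ = np.linalg.lstsq(delta_g, delta_c, rcond=None)
    return rc_t.T  # (n_params, n_pixels)

# Toy check: recover a known linear relation from synthetic experiments.
rng = np.random.default_rng(1)
true_rc = rng.normal(size=(4, 50))
dg = rng.normal(size=(200, 50))   # 200 experiments, 50-pixel textures
dc = dg @ true_rc.T
rc = estimate_experiment_matrix(dc, dg)
```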
• A search phase which allows the model to be adjusted onto objects in new images (using the relation found in the experiment matrix creation phase).
This phase is used to search for a particular texture and shape in new images. Modifications of the appearance parameter c given by equation 4 allow the model to be adjusted onto the searched object in new images. The algorithm of object search in a new image is as follows:
1- Generate g_m and x from the c parameters (initially set to 0).
2- Compute g_i, the texture of the image containing the searched object, inside the shape x.
3- Evaluate δg_0 = g_i − g_m and E_0 = |δg_0|.
4- Predict δc_0 = R_c ∗ δg_0.
5- Find the first attenuation coefficient k (among [1.5, 0.5, 0.25, 0.125, 0.0625]) giving E_j < E_0, with E_j = |δg_j| = |g_ij − g_mj|, where g_mj is the texture given by c_j = c − k ∗ δc_0 and g_ij is the texture of the image inside x_ij (the shape given by c_j).
6- While the error E_j is not stable, restart at step 1 with c = c_j.
When convergence of this third phase is reached, representations of the texture and shape of the searched object are synthesized through the model in g_m and x, respectively. Figure 1 gives an example of a face search with the AAM method.
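The search iteration above can be sketched as follows. This is an illustrative skeleton under stated assumptions, not the paper's implementation: `sample_texture` and `model_texture` are hypothetical callbacks standing in for image warping and model synthesis, the stability test is a simple tolerance on the error change, and the toy usage replaces real images with a linear synthetic problem.

```python
import numpy as np

K_CANDIDATES = [1.5, 0.5, 0.25, 0.125, 0.0625]  # attenuation coefficients (step 5)

def aam_search(c0, rc, sample_texture, model_texture, max_iter=30, tol=1e-6):
    """Iterative AAM search (steps 1-6 above), sketched.

    sample_texture(c): texture of the image inside the shape generated by c
    model_texture(c):  texture g_m synthesized by the model for parameters c
    """
    c = c0.copy()
    e_prev = np.inf
    for _ in range(max_iter):
        g_m = model_texture(c)                   # step 1: synthesize g_m (and x)
        g_i = sample_texture(c)                  # step 2: sample image texture
        dg0 = g_i - g_m                          # step 3: texture residual
        e0 = np.linalg.norm(dg0)
        dc0 = rc @ dg0                           # step 4: predict the correction
        for k in K_CANDIDATES:                   # step 5: first improving k
            cj = c - k * dc0
            ej = np.linalg.norm(sample_texture(cj) - model_texture(cj))
            if ej < e0:
                c = cj
                break
        else:
            break                                # no coefficient improves the error
        if abs(e_prev - ej) < tol:               # step 6: error considered stable
            break
        e_prev = ej
    return c

# Toy linear problem: the "image" texture is constant and equals A @ c_true,
# so -pinv(A) plays the role of an ideal experiment matrix R_c.
rng = np.random.default_rng(2)
A = rng.normal(size=(60, 5))
c_true = rng.normal(size=5)
target = A @ c_true
rc = -np.linalg.pinv(A)
c_found = aam_search(np.zeros(5), rc,
                     sample_texture=lambda c: target,
                     model_texture=lambda c: A @ c)
```

On this toy problem the iteration converges to c_true; on real images the callbacks would involve warping the image texture into the shape frame before comparison.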
3 A NEW PREPROCESSING:
DISTANCE MAPS
In the proposed approach, we consider distance rela-
tions between different edges of a searched texture.
We do not directly consider colour or grey levels in
the original image, so that the approach is more ro-
bust against illumination changes.
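As a generic illustration of this idea (the paper's own 4-step construction is described below; the edge detector, threshold, and brute-force distance computation here are our own simplifying assumptions), a distance map replaces each pixel's grey level by its distance to the nearest edge pixel:

```python
import numpy as np

def distance_map(image, edge_thresh=0.2):
    """Illustrative sketch: grey levels replaced by distance to the nearest edge.

    Edges come from a simple gradient-magnitude threshold (an assumption;
    any edge detector could be used), and distances are computed by brute
    force to keep the sketch dependency-free.
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ey, ex = np.nonzero(mag > edge_thresh * mag.max())  # edge pixel coordinates
    yy, xx = np.indices(image.shape)
    # Euclidean distance from every pixel to its nearest edge pixel.
    d = np.sqrt((yy[..., None] - ey) ** 2 + (xx[..., None] - ex) ** 2)
    return d.min(axis=-1)
```

The resulting map depends only on edge geometry, not on absolute grey levels, which is what makes the preprocessing robust to illumination changes.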
The preprocessing of AAMs that we present here transforms the original images into distance maps. The distance map associated with an original texture image (Figure 2-A) is obtained in the following 4 steps.
VISAPP 2006 - IMAGE UNDERSTANDING