Authors:
Marco Anisetti 1; Valerio Bellandi 1; Luigi Arnone 2 and Fabrizio Beverina 2

Affiliations:
1 University of Milan, Italy; 2 STMicroelectronics - Advanced System Technology Group, Italy
Keyword(s):
Face tracking, expression changes, FACS, illumination changes.
Related Ontology Subjects/Areas/Topics:
Applications; Computer Vision, Visualization and Computer Graphics; Human-Computer Interaction; Image and Video Analysis; Methodologies and Methods; Model-Based Object Tracking in Image Sequences; Motion and Tracking; Motion, Tracking and Stereo Vision; Pattern Recognition; Physiological Computing Systems; Software Engineering; Tracking of People and Surveillance; Video Analysis
Abstract:
Considering the face as an object moving through a scene, both its posture relative to the camera's point of view and its texture may change its appearance considerably. These changes are tightly coupled with alterations in the illumination conditions that occur when the subject moves, or when the lighting itself changes (a light switched on or off, etc.). This paper presents a method for tracking a face in a video sequence by recovering the full motion and the expression deformations of the head using a 3D expressive head model. Taking advantage of a 3D triangle-based face model, we are able to deal with any kind of illumination change and facial expression movement. In this parametric model, any change can be defined as a linear combination of a set of weighted bases, which can easily be included in a minimization algorithm using a classical Newton optimization approach. The 3D model of the face is created from some characteristic face points given on the first frame. Using a gradient descent approach, the algorithm simultaneously extracts the parameters related to the facial expression, the 3D posture, and the virtual illumination conditions. The algorithm has been tested on the Cohn-Kanade database (Kanade et al., 2000) for expression estimation, and its precision has been compared with a standard multi-camera system for 3D tracking (the Elite2002 system) (Ferrigno and Pedotti, 1985). Regarding illumination tests, we use synthetic movies created with standard 3D-mesh animation tools and real experimental videos recorded under very extreme illumination conditions. The results are promising in all cases, even with large head movements and changes in expression and illumination. The proposed approach has a twofold application: as part of a facial expression analysis system, and as preprocessing for identification systems (expression, pose, and illumination normalization).