Figure 5: Animation parameters for different sequences against the ground truth.
system rather than achieving real-time performance. The proposed system was first evaluated for its robustness on standard head pose estimation datasets. The mean absolute errors of yaw, pitch and roll were found to be comparable to, and in some cases better than, results reported in the literature. The proposed system was next tested on a standard facial expression dataset which largely involves movements of the eyebrows and mouth. Experimental results show that the proposed algorithm handles mouth and brow movements effectively. To evaluate the algorithm quantitatively, we manually collected ground truth for several facial expression test sequences; the estimated animation parameters were found to agree closely with this ground truth.
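For clarity, the evaluation metric used above can be sketched as follows. This is a minimal illustration only, assuming the estimated and ground-truth poses are stored as N x 3 arrays of (yaw, pitch, roll) in degrees; the array names, the units, and the angle-wrapping step are assumptions for the sketch, not details taken from the paper.

import numpy as np

def mean_absolute_errors(est, gt):
    """Per-angle mean absolute error between estimated and ground-truth poses.

    Both inputs are assumed to be (N, 3) arrays of (yaw, pitch, roll) in
    degrees; these conventions are assumptions of this sketch.
    """
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Wrap angular differences into [-180, 180) before taking absolute values,
    # so that e.g. 179 deg vs. -179 deg counts as a 2-degree error.
    diff = (est - gt + 180.0) % 360.0 - 180.0
    return np.abs(diff).mean(axis=0)  # -> [MAE_yaw, MAE_pitch, MAE_roll]

# Example usage with synthetic data:
# est = np.random.uniform(-30, 30, size=(100, 3))
# gt = est + np.random.normal(0, 2, size=(100, 3))
# mae_yaw, mae_pitch, mae_roll = mean_absolute_errors(est, gt)

The same per-parameter mean absolute error can be applied to the estimated animation parameters against the manually collected ground truth, without the angle wrapping when the parameters are not angular.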