HOLISTIC AND FEATURE-BASED INFORMATION TOWARDS DYNAMIC MULTI-EXPRESSIONS RECOGNITION

Zakia Hammal, Corentin Massot

Abstract

Holistic and feature-based processing have both been shown to be involved, in different ways, in the analysis of facial expressions by human observers. The current paper proposes a novel method that combines both approaches for the segmentation of “emotional segments” and the dynamic recognition of the corresponding facial expressions. The proposed model extends a previously proposed feature-based model for static facial expression recognition (Hammal et al., 2007). First, a new spatial filtering method is introduced for the holistic processing of the face, enabling the automatic segmentation of “emotional segments”. Second, the same filtering method is applied as a feature-based process for the automatic and precise segmentation of transient facial features and the estimation of their orientation. Third, a dynamic and progressive fusion of the permanent and transient facial feature deformations is performed inside each “emotional segment” for the temporal recognition of the corresponding facial expression. Experimental results show the robustness of the combined holistic and feature-based analysis, notably for multi-expression sequences. Moreover, compared to static facial expression classification, the obtained performance increases by 12% and compares favorably to that of human observers.
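The fusion step builds on the transferable belief model (Hammal et al., 2007; Smets and Kruse, 1994), whose core operation is the unnormalized conjunctive combination of mass functions. The sketch below illustrates that rule only; the frame of discernment and the mass values are hypothetical and not taken from the paper:

```python
from itertools import product

def conjunctive_combination(m1, m2):
    """Unnormalized conjunctive rule of the Transferable Belief Model:
    m(A) = sum over all B, C with B & C == A of m1(B) * m2(C).
    Conflicting mass accumulates on the empty set (no renormalization)."""
    combined = {}
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c  # intersection of the two focal elements
        combined[a] = combined.get(a, 0.0) + mb * mc
    return combined

# Toy example: two feature "sensors" over the frame {joy, surprise}
JOY, SUR = frozenset({"joy"}), frozenset({"surprise"})
BOTH = JOY | SUR
m1 = {JOY: 0.6, BOTH: 0.4}   # source 1 mostly supports joy
m2 = {SUR: 0.5, BOTH: 0.5}   # source 2 mostly supports surprise
m = conjunctive_combination(m1, m2)
# m[frozenset()] measures the conflict between the two sources
```

Keeping the conflict on the empty set, rather than renormalizing as in Dempster's rule, is what lets a TBM-based classifier flag inconsistent evidence (e.g. contradictory facial cues) instead of silently redistributing it.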

References

  1. Hammal Z., Couvreur L., Caplier A., Rombaut M., 2007. Facial expressions classification: A new approach based on transferable belief model. International Journal of Approximate Reasoning, 46(3), 542-567.
  2. Hammal Z., Eveno N., Caplier A., Coulon P-Y., 2006. Parametric models for facial features segmentation, Signal Processing, 86, 399-413.
  3. Smith M., Cottrell G., Gosselin F., Schyns P.G., 2005. Transmitting and decoding facial expressions of emotions, Psychological Science, 16, 184-189.
  4. Pantic M., Patras I., 2006. Dynamics of Facial Expression: Recognition of Facial Actions and Their Temporal Segments from Face Profile Image Sequences, IEEE Trans. SMC- Part B, 36(2), 433-449.
  5. Smets P., Kruse R., 1994. The transferable belief model, Artificial Intelligence, 66, 191-234.
  6. Tian Y., Kanade T., Cohn J.F., 2001. Recognizing Action Units for Facial Expression Analysis, IEEE Trans. PAMI, 23(2), 97-115.
  7. Massot C., Herault J., 2008. Model of Frequency Analysis in the Visual Cortex and the Shape from Texture Problem, International Journal of Computer Vision, 76(2).
  8. Denoeux T., 2008. Conjunctive and disjunctive combination of belief functions induced by nondistinct bodies of evidence, Artificial Intelligence, 172, 234-264.
  9. Beaudot W., 1994. Le traitement neuronal de l'information dans la rétine des vertébrés: un creuset d'idées pour la vision artificielle [Neural information processing in the vertebrate retina: a melting pot of ideas for artificial vision], PhD thesis, INPG, TIRF Laboratory, Grenoble, France.
  10. Tian Y.L., Kanade T., Cohn J.F., 2005. Facial expression analysis, In S.Z. Li & A.K. Jain (Eds), Handbook of Face Recognition, 247-276. NY: Springer.
  11. Littlewort G., Bartlett M.S., Fasel I., Susskind J., Movellan J., 2006. Dynamics of facial expression extracted automatically from video, Image and Vision Computing, 24, 615-625.
  12. Valstar M.F., Pantic M., 2007. Combined Support Vector Machines and Hidden Markov Models for Modeling Facial Action Temporal Dynamics, in Proc. IEEE Workshop on Human Computer Interaction, Rio de Janeiro, Brazil, 118-127.
  13. Zhang Y., Ji Q., 2005. Active and dynamic information fusion for facial expression understanding from image sequences, IEEE Trans. PAMI, 27(5), 699-714.
  14. Gralewski L., Campbell N., Penton-Voak I., 2006. Using a tensor framework for the analysis of facial dynamics, Proc. IEEE Int. Conf. FG, 217-222.
  15. Tong Y., Liao W., Ji Q., 2007. Facial action unit recognition by exploiting their dynamics and semantic relationships, IEEE Trans. PAMI, 29, 1683-1699.
  16. Pantic M., Valstar M.F., Rademaker R., Maat L., 2005. Web-based database for facial expression analysis, Proc. IEEE Int. Conf. ICME'05, Amsterdam, The Netherlands, July.


Paper Citation


in Harvard Style

Hammal Z. and Massot C. (2010). HOLISTIC AND FEATURE-BASED INFORMATION TOWARDS DYNAMIC MULTI-EXPRESSIONS RECOGNITION. In Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2010), ISBN 978-989-674-029-0, pages 300-309. DOI: 10.5220/0002837503000309


in Bibtex Style

@conference{visapp10,
author={Zakia Hammal and Corentin Massot},
title={HOLISTIC AND FEATURE-BASED INFORMATION TOWARDS DYNAMIC MULTI-EXPRESSIONS RECOGNITION},
booktitle={Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2010)},
year={2010},
pages={300-309},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0002837503000309},
isbn={978-989-674-029-0},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2010)
TI - HOLISTIC AND FEATURE-BASED INFORMATION TOWARDS DYNAMIC MULTI-EXPRESSIONS RECOGNITION
SN - 978-989-674-029-0
AU - Hammal Z.
AU - Massot C.
PY - 2010
SP - 300
EP - 309
DO - 10.5220/0002837503000309