Real-time Emotion Recognition - Novel Method for Geometrical Facial Features Extraction

Claudio Loconsole, Catarina Runa Miranda, Gustavo Augusto, Antonio Frisoli, Verónica Costa Orvalho

Abstract

Facial emotions are an essential source of information in human communication. Humans recognize them automatically by exploiting the real-time variations of facial features. Replicating this natural process with computer vision systems remains a challenge, however, since automation and real-time requirements are often compromised in order to achieve accurate emotion detection. In this work, we propose and validate a novel methodology for facial feature extraction that automatically recognizes facial emotions with a high degree of accuracy. The methodology uses the output of a real-time face tracker to define and extract two new types of features: eccentricity and linear features. These features are then used to train a machine learning classifier. As a result, we obtain a processing pipeline that classifies the six basic Ekman emotions (plus Contemptuous and Neutral) in real time, requiring no manual intervention or prior information about facial traits.
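The two feature types named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: it assumes eccentricity is measured on an ellipse fit to a subset of tracked 2D landmarks (here estimated from the covariance eigenvalues of the point cloud), and that a linear feature is a landmark-pair distance normalized by a reference distance (e.g. inter-ocular) to remove scale. The function names are hypothetical.

```python
import numpy as np

def eccentricity(points):
    """Eccentricity of the best-fit ellipse of a 2D landmark cloud,
    estimated from the covariance eigenvalues (a sketch; the paper's
    exact ellipse construction may differ)."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts, rowvar=False)
    # Eigenvalues of the covariance give squared semi-axis lengths
    # up to a common scale factor, which cancels in the ratio.
    evals = np.sort(np.linalg.eigvalsh(cov))      # ascending: [minor, major]
    minor, major = np.sqrt(evals)
    return float(np.sqrt(1.0 - (minor / major) ** 2))

def linear_feature(p, q, ref_dist):
    """Euclidean distance between two landmarks, normalized by a
    reference distance so the feature is scale-invariant."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)) / ref_dist)

# Example: points sampled on an ellipse with semi-axes a=2, b=1
# have eccentricity sqrt(1 - b^2/a^2) = sqrt(3)/2 ≈ 0.866.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
ellipse = np.column_stack([2.0 * np.cos(theta), np.sin(theta)])
e = eccentricity(ellipse)
```

Per frame, a feature vector would concatenate such eccentricity values (e.g. for eye and mouth regions) with a set of normalized landmark distances, and the vector would be fed to the classifier (the paper uses a machine learning classifier; an SVM would be a typical choice given the cited related work).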

References

  1. Amari, S. and Wu, S. (1999). Improving support vector machine classifiers by modifying kernel functions. Neural Networks, 12(6):783-789.
  2. Asthana, A., Saragih, J., Wagner, M., and Goecke, R. (2009). Evaluating aam fitting methods for facial expression recognition. In Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. 3rd International Conference on, pages 1-8. IEEE.
  3. Fernandes, T., Miranda, J., Alvarez, X., and Orvalho, V. (2011). LIFEisGAME - An Interactive Serious Game for Teaching Facial Expression Recognition. Interfaces, pages 1-2.
  4. Fischer, R. (2004). Automatic Facial Expression Analysis and Emotional Classification.
  5. Gang, L., Xiao-hua, L., Ji-liu, Z., and Xiao-gang, G. (2009). Geometric feature based facial expression recognition using multiclass support vector machines. In Granular Computing, 2009, GRC '09. IEEE International Conference on, pages 318-321.
  6. Hammal, Z., Couvreur, L., Caplier, A., and Rombaut, M. (2007). Facial expression classification: An approach based on the fusion of facial deformations using the transferable belief model. International Journal of Approximate Reasoning, 46(3):542-567.
  7. Hong, J., Han, M., Song, K., and Chang, F. (2007). A fast learning algorithm for robotic emotion recognition. In Computational Intelligence in Robotics and Automation, 2007. CIRA 2007. International Symposium on, pages 25-30. IEEE.
  8. Jamshidnezhad, A. and Nordin, M. (2012). Challenging of facial expressions classification systems: Survey, critical considerations and direction of future work. Research Journal of Applied Sciences, 4.
  9. Kapoor, A., Qi, Y., and Picard, R. W. (2003). Fully automatic upper facial action recognition. In Proceedings of the IEEE International Workshop on Analysis and Modeling of Faces and Gestures, AMFG '03, pages 195-, Washington, DC, USA. IEEE Computer Society.
  10. Ko, K. and Sim, K. (2010). Development of a facial emotion recognition method based on combining AAM with DBN. In Cyberworlds (CW), 2010 International Conference on, pages 87-91. IEEE.
  11. Kotsia, I., Buciu, I., and Pitas, I. (2008). An analysis of facial expression recognition under partial facial image occlusion. Image and Vision Computing, 26(7):1052-1067.
  12. Kotsia, I. and Pitas, I. (2007). Facial expression recognition in image sequences using geometric deformation features and support vector machines. Image Processing, IEEE Transactions on, 16(1):172-187.
  13. Luximon, Y., Ball, R., and Justice, L. (2011). The 3D Chinese head and face modeling. Computer-Aided Design.
  14. Michel, P. and El Kaliouby, R. (2003). Real time facial expression recognition in video using support vector machines. In Proceedings of the 5th international conference on Multimodal interfaces, pages 258-264. ACM.
  15. Niese, R., Al-Hamadi, A., Farag, A., Neumann, H., and Michaelis, B. (2012). Facial expression recognition based on geometric and optical flow features in colour image sequences. Computer Vision, IET, 6(2):79-89.
  16. Pardàs, M. and Bonafonte, A. (2002). Facial animation parameters extraction and expression recognition using hidden Markov models. Signal Processing: Image Communication, 17(9):675-688.
  17. Rodriguez, J., Perez, A., and Lozano, J. (2010). Sensitivity analysis of k-fold cross validation in prediction error estimation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(3):569-575.
  18. Saragih, J., Lucey, S., and Cohn, J. (2011a). Deformable model fitting by regularized landmark mean-shift. International Journal of Computer Vision, pages 1-16.
  19. Saragih, J., Lucey, S., and Cohn, J. (2011b). Real-time avatar animation from a single image. In Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on, pages 117-124. IEEE.
  20. Seyedarabi, H., Aghagolzadeh, A., and Khanmohammadi, S. (2004). Recognition of six basic facial expressions by feature-points tracking using RBF neural network and fuzzy inference system. In Multimedia and Expo, 2004. ICME '04. 2004 IEEE International Conference on, volume 2, pages 1219-1222.
  21. Shan, C., Gong, S., and McOwan, P. (2009). Facial expression recognition based on local binary patterns: A comprehensive study. Image and Vision Computing, 27(6):803-816.
  22. Wang, J. and Yin, L. (2007). Static topographic modeling for facial expression recognition and analysis. Computer Vision and Image Understanding, 108(1-2):19-34.
  23. Youssif, A. A. A. and Asker, W. A. A. (2011). Automatic facial expression recognition system based on geometric and appearance features. Computer and Information Science, pages 115-124.
  24. Zeng, Z., Pantic, M., Roisman, G., and Huang, T. (2009). A survey of affect recognition methods: Audio, visual, and spontaneous expressions. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(1):39-58.
  25. Zhang, L., Tjondronegoro, D., and Chandran, V. (2012). Discovering the best feature extraction and selection algorithms for spontaneous facial expression recognition. 2012 IEEE International Conference on Multimedia and Expo.


Paper Citation


in Harvard Style

Loconsole C., Runa Miranda C., Augusto G., Frisoli A. and Costa Orvalho V. (2014). Real-time Emotion Recognition - Novel Method for Geometrical Facial Features Extraction. In Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2014) ISBN 978-989-758-003-1, pages 378-385. DOI: 10.5220/0004738903780385


in Bibtex Style

@conference{visapp14,
author={Claudio Loconsole and Catarina Runa Miranda and Gustavo Augusto and Antonio Frisoli and Verónica Costa Orvalho},
title={Real-time Emotion Recognition - Novel Method for Geometrical Facial Features Extraction},
booktitle={Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2014)},
year={2014},
pages={378-385},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004738903780385},
isbn={978-989-758-003-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2014)
TI - Real-time Emotion Recognition - Novel Method for Geometrical Facial Features Extraction
SN - 978-989-758-003-1
AU - Loconsole C.
AU - Runa Miranda C.
AU - Augusto G.
AU - Frisoli A.
AU - Costa Orvalho V.
PY - 2014
SP - 378
EP - 385
DO - 10.5220/0004738903780385