Various Fusion Schemes to Recognize Simulated and Spontaneous Emotions

Sonia Gharsalli, Hélène Laurent, Bruno Emile, Xavier Desquesnes


This paper investigates the performance of combining geometric and appearance features with various fusion strategies in a facial emotion recognition application. Geometric features are extracted by a distance-based method; appearance features are extracted by a set of Gabor filters. Several fusion methods are proposed from two principal classes, namely early fusion and late fusion: the former combines features in the feature space, while the latter fuses both feature types in the decision space, either by a statistical rule or by a classification method. The distance-based method, the Gabor method, and the hybrid methods are evaluated on a simulated (CK+) and a spontaneous (FEEDTUM) emotion database. The comparison shows that the late fusion methods achieve better recognition rates than the early fusion method. Moreover, late fusion methods based on statistical rules outperform the other hybrid methods for simulated emotion recognition. For spontaneous emotion recognition, however, the statistical-rule-based methods improve the recognition of positive emotions, while the classification-based method slightly improves sadness and disgust recognition. A comparison with hybrid methods from the literature is also provided.
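The two fusion families contrasted in the abstract can be sketched in a few lines. In the sketch below, early fusion concatenates the geometric and appearance feature vectors before training a single classifier, while late fusion trains one classifier per feature type and combines their class posteriors with the sum rule (one common statistical combination rule). This is a minimal illustration only: the paper uses its own feature extractors and classifiers, whereas here a toy nearest-centroid classifier stands in for them, so none of the names or choices below reflect the paper's exact configuration.

```python
import numpy as np


class NearestCentroid:
    """Toy stand-in classifier; purely illustrative, not the paper's classifier."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict_proba(self, X):
        # Turn distances to each class centroid into normalized pseudo-posteriors.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        p = np.exp(-d)
        return p / p.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.classes_[np.argmax(self.predict_proba(X), axis=1)]


def early_fusion(geo, app, y, geo_test, app_test):
    # Early fusion: concatenate both feature types into one vector per sample,
    # then train and apply a single classifier in the joint feature space.
    clf = NearestCentroid().fit(np.hstack([geo, app]), y)
    return clf.predict(np.hstack([geo_test, app_test]))


def late_fusion_sum(geo, app, y, geo_test, app_test):
    # Late fusion with a statistical rule: one classifier per feature type;
    # the per-classifier posteriors are combined with the sum rule, and the
    # final decision is taken in the fused decision space.
    p_geo = NearestCentroid().fit(geo, y).predict_proba(geo_test)
    p_app = NearestCentroid().fit(app, y).predict_proba(app_test)
    return np.unique(y)[np.argmax(p_geo + p_app, axis=1)]
```

A classification-based late fusion, the other variant mentioned above, would instead feed the per-classifier scores into a second-stage classifier rather than applying a fixed rule such as the sum.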


  1. Abdat, F., Maaoui, C., and Pruski, A. (2011). Human-computer interaction using emotion recognition from facial expression. 5th UKSim European Symposium on Computer Modeling and Simulation (EMS), pages 196-201.
  2. Anderson, K. and McOwan, P. W. (2006). A real-time automated system for the recognition of human facial expressions. IEEE Transactions on Systems, Man, and Cybernetics, 36(1):96-105.
  3. Atrey, P. K., Hossain, M. A., El-Saddik, A., and Kankanhalli, M. S. (2010). Multimodal fusion for multimedia analysis: a survey. Multimedia Systems, pages 345-379.
  4. Bartlett, M., Littlewort, G., Frank, M., Lainscsek, C., Fasel, I., and Movellan, J. (2006). Automatic recognition of facial actions in spontaneous expressions. Journal of Multimedia, pages 22-35.
  5. Bartlett, M. S., Littlewort, G., Fasel, I., and Movellan, J. R. (2003). Real time face detection and facial expression recognition: Development and applications to human computer interaction. Computer Vision and Pattern Recognition Workshop.
  6. Bouguet, J. (2000). Pyramidal implementation of the Lucas-Kanade feature tracker. Intel Corporation, Microprocessor Research Labs.
  7. Bradski, G., Darrell, T., Essa, I., Malik, J., Perona, P., Sclaroff, S., and Tomasi, C. (2006). http://
  8. Busso, C., Deng, Z., Yildirim, S., Bulut, M., Lee, C.-M., Kazemzadeh, A., Lee, S., Neumann, U., and Narayanan, S. (2004). Analysis of emotion recognition using facial expressions, speech and multimodal information. 6th International Conference on Multimodal Interfaces, pages 205-211.
  9. Chen, J., Chen, D., Gong, Y., Yu, M., Zhang, K., and Wang, L. (2012). Facial expression recognition using geometric and appearance features. Proceedings of the 4th International Conference on Internet Multimedia Computing and Service, pages 29-33.
  10. Fasel, I., Bartlett, M., and Movellan, J. (2002). A comparison of Gabor filter methods for automatic detection of facial landmarks. 5th International Conference on Automatic Face and Gesture Recognition, pages 345-350.
  11. Gunes, H. and Piccardi, M. (2005). Affect recognition from face and body: Early fusion vs. late fusion. IEEE International Conference on Systems, Man and Cybernetics, 4:3437-3443.
  12. Kotsia, I., Buciu, I., and Pitas, I. (2008a). An analysis of facial expression recognition under partial facial image occlusion. Image and Vision Computing, 26(7):1052-1067.
  13. Kotsia, I. and Pitas, I. (2007). Facial expression recognition in image sequences using geometric deformation features and support vector machines. IEEE Transactions on Image Processing, 16:172-187.
  14. Kotsia, I., Zafeiriou, S., and Pitas, I. (2008b). Texture and shape information fusion for facial expression and facial action unit recognition. Pattern Recognition, pages 833-851.
  15. Kuncheva, L. I. (2002). A theoretical study on six classifier fusion strategies. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 281-286.
  16. Lee, C.-J. and Wang, S.-D. (1999). Fingerprint feature extraction using Gabor filters. Electronics Letters, pages 288-290.
  17. Lucey, P., Cohn, J., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. (2010). The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression. IEEE Computer Vision and Pattern Recognition Workshops, pages 94-101.
  18. Mironica, I., Ionescu, B., Knees, P., and Lambert, P. (2013). An in-depth evaluation of multimodal video genre categorization. 11th International Workshop on Content-Based Multimedia Indexing, pages 11-16.
  19. Movellan, J. (2005). Tutorial on Gabor filters. MPLab Tutorials, UCSD MPLab, Tech. Rep.
  20. Niaz, U. and Merialdo, B. (2013). Fusion methods for multi-modal indexing of web data. 14th International Workshop Image Analysis for Multimedia Interactive Services, pages 1-4.
  21. Shan, C., Gong, S., and McOwan, P. W. (2009). Facial expression recognition based on Local Binary Patterns: A comprehensive study. Image and Vision Computing, 27:803-816.
  22. Shi, J. and Tomasi, C. (1994). Good features to track. IEEE Computer Society Conference on Computer Vision and Pattern Recognition., pages 593-600.
  23. Snelick, R., Uludag, U., Mink, A., Indovina, M., and Jain, A. (2005). Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27:450-455.
  24. Snoek, C. G. M. (2005). Early versus late fusion in semantic video analysis. ACM Multimedia, pages 399-402.
  25. Vinay, K. and Shreyas, B. (2006). Face recognition using Gabor wavelets. 40th Asilomar Conference on Signals, Systems and Computers, pages 593-597.
  26. Viola, P. and Jones, M. (2001). Robust real-time object detection. International Journal of Computer Vision.
  27. Vukadinovic, D. and Pantic, M. (2005). Fully automatic facial feature point detection using Gabor-feature-based boosted classifiers. IEEE International Conference on Systems, Man and Cybernetics, pages 1692-1698.
  28. Wallhoff, F. (2006). Facial expressions and emotion database, waf/fgnet/feedtum.html.
  29. Wan, S. and Aggarwal, J. (2013). A scalable metric learning-based voting method for expression recognition. 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pages 1-8.
  30. Zeng, Z., Pantic, M., Roisman, G. I., and Huang, T. S. (2009). A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 39-58.
  31. Zhang, L., Tjondronegoro, D., and Chandran, V. (2012). Discovering the best feature extraction and selection algorithms for spontaneous facial expression recognition. IEEE International Conference on Multimedia and Expo, pages 1027-1032.

Paper Citation

in Harvard Style

Gharsalli S., Laurent H., Emile B. and Desquesnes X. (2015). Various Fusion Schemes to Recognize Simulated and Spontaneous Emotions. In Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2015) ISBN 978-989-758-090-1, pages 424-431. DOI: 10.5220/0005312804240431

in Bibtex Style

@conference{visapp15,
author={Sonia Gharsalli and Hélène Laurent and Bruno Emile and Xavier Desquesnes},
title={Various Fusion Schemes to Recognize Simulated and Spontaneous Emotions},
booktitle={Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2015)},
year={2015},
pages={424-431},
doi={10.5220/0005312804240431},
isbn={978-989-758-090-1},
}

in EndNote Style

TY - CONF
JO - Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2015)
TI - Various Fusion Schemes to Recognize Simulated and Spontaneous Emotions
SN - 978-989-758-090-1
AU - Gharsalli S.
AU - Laurent H.
AU - Emile B.
AU - Desquesnes X.
PY - 2015
SP - 424
EP - 431
DO - 10.5220/0005312804240431
ER -