An Integrated Approach for Efficient Analysis of Facial Expressions

Mehdi Ghayoumi, Arvind K. Bansal

2014

Abstract

This paper describes a new automated facial expression analysis system that integrates Locality Sensitive Hashing (LSH) with Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) to improve the execution efficiency of emotion classification and the continuous identification of unidentified facial expressions. Images are classified using feature vectors extracted from the two most significant segments of the face: the eye segments and the mouth segment. LSH uses a family of hash functions to map similar images into a set of collision buckets. Taking a representative image from each bucket reduces the image space by pruning redundant, similar images in the collision buckets. The application of PCA and LDA reduces the dimensionality of the data space. We describe the overall architecture and the implementation. The performance results show that integrating LSH with PCA and LDA significantly improves computational efficiency, and improves accuracy by reducing the frequency bias of similar images during the PCA and SVM stages. After classifying the images in the database, we tag the collision buckets with basic emotions and apply LSH to new, unidentified facial expressions to identify their emotions. This LSH-based identification is suitable for fast, continuous recognition of unidentified facial expressions.

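The sketch below is a minimal illustration of the pipeline described in the abstract, not the authors' implementation: random-hyperplane LSH groups similar feature vectors into collision buckets, one representative per bucket is kept, PCA and then LDA reduce the dimensionality, and an SVM classifies the reduced vectors into basic emotions. The helper names (hash_features, prune_buckets), the toy data, and all parameter values are assumptions chosen for illustration.

import numpy as np
from collections import defaultdict
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def hash_features(X, n_bits=16, seed=0):
    """Random-hyperplane LSH: map each feature vector to a binary bucket key."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))
    return [tuple((x @ planes > 0).astype(int)) for x in X]

def prune_buckets(X, y, keys):
    """Keep one representative image per collision bucket to prune near-duplicates."""
    buckets = defaultdict(list)
    for i, key in enumerate(keys):
        buckets[key].append(i)
    reps = [members[0] for members in buckets.values()]  # first member represents the bucket
    return X[reps], y[reps]

# Toy stand-ins for eye/mouth-segment feature vectors and basic-emotion labels.
X = np.random.rand(200, 64)
y = np.random.randint(0, 6, 200)

X_rep, y_rep = prune_buckets(X, y, hash_features(X))
pca = PCA(n_components=20).fit(X_rep)                     # unsupervised dimension reduction
X_pca = pca.transform(X_rep)
lda = LinearDiscriminantAnalysis().fit(X_pca, y_rep)      # class-discriminant projection
clf = SVC(kernel="rbf").fit(lda.transform(X_pca), y_rep)  # SVM emotion classifier

For the abstract's final step, the same bucket keys, tagged with the emotion of their representative, could presumably serve as the fast lookup table for new, unidentified expressions.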


Paper Citation


in Harvard Style

Ghayoumi, M. and Bansal, A. (2014). An Integrated Approach for Efficient Analysis of Facial Expressions. In Proceedings of the 11th International Conference on Signal Processing and Multimedia Applications - Volume 1: SIGMAP, (ICETE 2014) ISBN 978-989-758-046-8, pages 211-219. DOI: 10.5220/0005116702110219


in Bibtex Style

@conference{sigmap14,
author={Mehdi Ghayoumi and Arvind K. Bansal},
title={An Integrated Approach for Efficient Analysis of Facial Expressions},
booktitle={Proceedings of the 11th International Conference on Signal Processing and Multimedia Applications - Volume 1: SIGMAP, (ICETE 2014)},
year={2014},
pages={211-219},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005116702110219},
isbn={978-989-758-046-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 11th International Conference on Signal Processing and Multimedia Applications - Volume 1: SIGMAP, (ICETE 2014)
TI - An Integrated Approach for Efficient Analysis of Facial Expressions
SN - 978-989-758-046-8
AU - Ghayoumi M.
AU - Bansal A.
PY - 2014
SP - 211
EP - 219
DO - 10.5220/0005116702110219