Authors:
Mehdi Ghayoumi and Arvind K. Bansal
Affiliation:
Kent State University, United States
Keyword(s):
Emotion Recognition, Facial Expression, Image Analysis, Social Robotics.
Related Ontology Subjects/Areas/Topics:
Human-Machine Interface; Image and Video Processing, Compression and Segmentation; Interactive Multimedia: Games and Digital Television; Multimedia; Multimedia and Communications; Multimedia Signal Processing; Multimedia Systems and Applications; Multimodal Signal Processing; Telecommunications
Abstract:
This paper describes a new automated facial expression analysis system that integrates Locality Sensitive Hashing (LSH) with Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) to improve the execution efficiency of emotion classification and the continuous identification of unidentified facial expressions. Images are classified using feature vectors on the two most significant segments of the face: the eye segments and the mouth segment. LSH uses a family of hashing functions to map similar images into a set of collision buckets. Taking a representative image from each bucket reduces the image space by pruning redundant similar images. The application of PCA and LDA reduces the dimension of the data space. We describe the overall architecture and the implementation. The performance results show that the integration of LSH with PCA and LDA significantly improves computational efficiency, and improves accuracy by reducing the frequency bias of similar images during the PCA and SVM stages. After classifying the images in the database, we tag the collision buckets with basic emotions and apply LSH to new unidentified facial expressions to identify the emotions. This LSH-based identification is suitable for fast continuous recognition of unidentified facial expressions.
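The LSH bucketing and representative-selection step described in the abstract can be sketched roughly as follows. This is a minimal illustration only: the random-projection hash family, the bucket-key encoding, and the choice of the first image as a bucket's representative are assumptions made here for the sketch, not details taken from the paper.

```python
import numpy as np

def lsh_buckets(features, n_planes=8, seed=0):
    """Hash image feature vectors into collision buckets.

    Uses a random-projection (hyperplane) hash family as an illustrative
    stand-in for the paper's hashing functions: each bit of a vector's
    signature is the sign of its projection onto one random hyperplane,
    so nearby vectors tend to share a signature (i.e., collide).
    """
    rng = np.random.default_rng(seed)
    dim = features.shape[1]
    planes = rng.normal(size=(n_planes, dim))   # one hyperplane per hash bit
    bits = (features @ planes.T) > 0            # sign pattern = hash signature
    buckets = {}
    for idx, row in enumerate(bits):
        buckets.setdefault(tuple(row), []).append(idx)
    return buckets

def representatives(buckets):
    """Prune redundant similar images: keep one representative per bucket
    (here, simply the first index; the selection rule is an assumption)."""
    return [members[0] for members in buckets.values()]

# Toy usage with random 64-dimensional feature vectors standing in for
# eye/mouth segment features.
feats = np.random.default_rng(1).normal(size=(100, 64))
buckets = lsh_buckets(feats)
reps = representatives(buckets)
```

Once the database images are bucketed and each bucket tagged with a basic emotion, a new expression can be hashed with the same hash family and assigned the tag of the bucket it lands in, which is what makes the lookup fast enough for continuous recognition.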