Authors: Nattawat Chanthaphan 1; Keiichi Uchimura 1; Takami Satonaka 2 and Tsuyoshi Makioka 2
Affiliations: 1 Kumamoto University, Japan; 2 Kumamoto Prefectural College of Technology, Japan
Keyword(s):
Emotion Recognition, Feature Extraction, Structured Streaming Skeleton, Depth Sensor.
Related Ontology Subjects/Areas/Topics:
Design and Implementation of Signal Processing Systems; Image and Video Processing, Compression and Segmentation; Multidimensional Signal Processing; Multimedia; Multimedia Signal Processing; Multimedia Systems and Applications; Sensors and Multimedia; Telecommunications
Abstract:
In this paper, we present the follow-up experiment for our novel feature extraction approach for facial expression recognition. In our previous work, we proposed extracting facial features from the 3D facial wire-frame generated by a depth camera (Kinect v2). We introduced facial movement streams, derived from the distances between each pair of nodes on the facial wire-frame, tracked through each frame of the movement. That experiment used two classifiers, K-Nearest Neighbors (K-NN) and Support Vector Machine (SVM), with a fixed value of the k parameter and a fixed kernel, and was evaluated on a 15-person data set collected with our own software; it showed promising accuracy and performance for our approach. Consequently, we sought the parameter settings that would yield the best performance. In this experiment, we tune the k parameter of K-NN as well as the kernel of the SVM, measuring both accuracy and execution time. K-NN outperforms all other classifiers with 90.33% accuracy, whereas SVM consumes considerably more time and reaches only 67% accuracy.
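The pipeline the abstract describes (pairwise node-distance streams fed to tuned K-NN and SVM classifiers) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic landmark sequences stand in for the Kinect v2 wire-frame data, and the function name `movement_stream`, the node/frame counts, and the parameter grids are all assumptions.

```python
# Hedged sketch of distance-stream features plus classifier tuning.
# Synthetic data replaces the paper's Kinect v2 wire-frame tracks.
import numpy as np
from itertools import combinations
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)

def movement_stream(frames):
    """frames: (n_frames, n_nodes, 3) landmark coordinates.
    Returns a flat vector of inter-node distances over all frames,
    i.e. one 'movement stream' per node pair."""
    n_nodes = frames.shape[1]
    pairs = list(combinations(range(n_nodes), 2))
    dists = np.stack(
        [np.linalg.norm(frames[:, i] - frames[:, j], axis=1) for i, j in pairs],
        axis=1,
    )  # shape: (n_frames, n_pairs)
    return dists.ravel()

# Synthetic stand-in: 60 sequences, 10 frames, 8 facial nodes, 3 classes
# (classes differ by the spread of the landmarks, so distances differ).
X = np.stack(
    [movement_stream(rng.normal(size=(10, 8, 3)) * (1 + c))
     for c in range(3) for _ in range(20)]
)
y = np.repeat(np.arange(3), 20)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# Tune k for K-NN and the kernel for SVM, as in the paper's follow-up.
knn = GridSearchCV(KNeighborsClassifier(),
                   {"n_neighbors": [1, 3, 5, 7]}).fit(X_tr, y_tr)
svm = GridSearchCV(SVC(),
                   {"kernel": ["linear", "rbf", "poly"]}).fit(X_tr, y_tr)
print("K-NN best k:", knn.best_params_["n_neighbors"],
      "accuracy:", knn.score(X_te, y_te))
print("SVM best kernel:", svm.best_params_["kernel"],
      "accuracy:", svm.score(X_te, y_te))
```

Because the distance streams are translation- and rotation-invariant, they depend only on the facial shape per frame, which is the property the original feature design exploits.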