
Authors: Nattawat Chanthaphan 1 ; Keiichi Uchimura 1 ; Takami Satonaka 2 and Tsuyoshi Makioka 2

Affiliations: 1 Kumamoto University, Japan ; 2 Kumamoto Prefectural College of Technology, Japan

ISBN: 978-989-758-196-0

Keyword(s): Emotion Recognition, Feature Extraction, Structured Streaming Skeleton, Depth Sensor.

Related Ontology Subjects/Areas/Topics: Design and Implementation of Signal Processing Systems ; Image and Video Processing, Compression and Segmentation ; Multidimensional Signal Processing ; Multimedia ; Multimedia Signal Processing ; Multimedia Systems and Applications ; Sensors and Multimedia ; Telecommunications

Abstract: In this paper, we present the next experiment with our novel feature-extraction approach for facial expression recognition. In our previous work, we proposed extracting facial features from a 3D facial wire-frame generated by a depth camera (Kinect v2). We introduced facial movement streams, derived from distance measurements between each pair of nodes on the facial wire-frame, tracked through each frame of the movement. The experiment used two classifiers, K-Nearest Neighbors (K-NN) and Support Vector Machine (SVM), with fixed values of the k parameter and kernel. A 15-person data set collected with our software was used to evaluate the system, and that experiment yielded promising accuracy and performance. Consequently, we wanted to find the parameter settings that give the best performance of our approach. In this experiment, we tune the parameter values of K-NN as well as the kernel of the SVM, measuring both accuracy and execution time. K-NN outperforms all other classifiers with 90.33% accuracy, while SVM consumes much more time and reaches only 67% accuracy.
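The pipeline the abstract describes — pairwise node distances per frame averaged into a movement-stream feature vector, then a parameter sweep over K-NN's k and the SVM kernel — can be sketched as below. This is a minimal illustration assuming scikit-learn and SciPy; the node count, synthetic data, and parameter grids are hypothetical stand-ins, not the paper's actual settings.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips, n_frames, n_nodes, n_classes = 200, 10, 12, 5

def stream_features(clip):
    """Pairwise node distances per frame, averaged over the clip."""
    return np.mean([pdist(frame) for frame in clip], axis=0)

# Synthetic stand-in for Kinect wire-frame clips: (frames, nodes, xyz),
# scaled per class so the distance features carry a weak class signal.
labels = rng.integers(0, n_classes, size=n_clips)
clips = rng.normal(size=(n_clips, n_frames, n_nodes, 3)) * (1 + 0.3 * labels[:, None, None, None])
X = np.array([stream_features(c) for c in clips])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

# Sweep k for K-NN and the kernel for SVM, as the abstract describes.
knn = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [1, 3, 5, 7, 9]}, cv=5).fit(X_tr, y_tr)
svm = GridSearchCV(SVC(), {"kernel": ["linear", "rbf", "poly"]}, cv=5).fit(X_tr, y_tr)

print("K-NN best k =", knn.best_params_["n_neighbors"], "accuracy =", round(knn.score(X_te, y_te), 3))
print("SVM best kernel =", svm.best_params_["kernel"], "accuracy =", round(svm.score(X_te, y_te), 3))
```

With 12 nodes, each frame yields 66 pairwise distances, so every clip becomes a 66-dimensional feature vector regardless of its length.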

CC BY-NC-ND 4.0


Paper citation in several formats:
Chanthaphan, N.; Uchimura, K.; Satonaka, T. and Makioka, T. (2016). Multiple Classifier Learning of New Facial Extraction Approach for Facial Expressions Recognition using Depth Sensor. In Proceedings of the 13th International Joint Conference on e-Business and Telecommunications - Volume 5: SIGMAP, (ICETE 2016), ISBN 978-989-758-196-0, pages 19-27. DOI: 10.5220/0005948000190027

@conference{sigmap16,
author={Nattawat Chanthaphan and Keiichi Uchimura and Takami Satonaka and Tsuyoshi Makioka},
title={Multiple Classifier Learning of New Facial Extraction Approach for Facial Expressions Recognition using Depth Sensor},
booktitle={Proceedings of the 13th International Joint Conference on e-Business and Telecommunications - Volume 5: SIGMAP, (ICETE 2016)},
year={2016},
pages={19-27},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005948000190027},
isbn={978-989-758-196-0},
}

TY - CONF

JO - Proceedings of the 13th International Joint Conference on e-Business and Telecommunications - Volume 5: SIGMAP, (ICETE 2016)
TI - Multiple Classifier Learning of New Facial Extraction Approach for Facial Expressions Recognition using Depth Sensor
SN - 978-989-758-196-0
AU - Chanthaphan, N.
AU - Uchimura, K.
AU - Satonaka, T.
AU - Makioka, T.
PY - 2016
SP - 19
EP - 27
DO - 10.5220/0005948000190027
