Authors: J. A. Castro-Vargas¹; B. S. Zapata-Impata¹; P. Gil²; J. A. Garcia-Rodriguez³ and F. Torres²
Affiliations:
¹ Dept. of Physics, Systems Engineering and Signal Theory, University of Alicante, San Vicente del Raspeig, Alicante, Spain
² Dept. of Physics, Systems Engineering and Signal Theory, University of Alicante, San Vicente del Raspeig, Alicante, Spain; Computer Science Research Institute, University of Alicante, San Vicente del Raspeig, Alicante, Spain
³ Dept. of Computer Technology, University of Alicante, San Vicente del Raspeig, Alicante, Spain; Computer Science Research Institute, University of Alicante, San Vicente del Raspeig, Alicante, Spain
Keyword(s):
Gesture Recognition from Video, 3D Convolutional Neural Network.
Related Ontology Subjects/Areas/Topics: Applications; Pattern Recognition; Robotics; Software Engineering
Abstract:
In the past, methods for hand sign recognition have been successfully tested in Human-Robot Interaction (HRI) using traditional methodologies based on static image features and machine learning. However, the recognition of gestures in video sequences remains an open problem, because current detection methods achieve low scores when the background is undefined or the scenario is unstructured. In recent years, deep learning techniques have been applied to this problem. In this paper, we present a study analysing the performance of a 3DCNN architecture for hand gesture recognition in an unstructured scenario. The system achieves a score of 73% in both accuracy and F1. The aim of this work is to implement a system for commanding robots with gestures recorded by video in real scenarios.
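The key ingredient named in the abstract, the 3DCNN, extends ordinary 2D convolution with a temporal dimension, so each learned filter responds to motion across consecutive frames rather than to a single static image. As a minimal illustrative sketch (not the authors' actual architecture), the core operation can be written as a single "valid" 3D convolution over a grayscale video volume:

```python
# Minimal sketch of the core operation in a 3D CNN: one 3D convolution
# applied to a (time, height, width) video volume. This is illustrative
# only; the paper's architecture, filter sizes, and layers are not shown here.
import numpy as np

def conv3d_valid(video: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Valid' 3D convolution (cross-correlation, as in deep learning) of a
    (T, H, W) video with a (t, h, w) kernel, stride 1, no padding."""
    T, H, W = video.shape
    t, h, w = kernel.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output voxel mixes a spatio-temporal patch, so the
                # filter can capture motion, not just per-frame appearance.
                out[i, j, k] = np.sum(video[i:i + t, j:j + h, k:k + w] * kernel)
    return out

# Toy example: 8 frames of 16x16 pixels, one 3x3x3 filter.
video = np.ones((8, 16, 16))
kernel = np.ones((3, 3, 3))
features = conv3d_valid(video, kernel)
print(features.shape)  # → (6, 14, 14): the temporal axis shrinks too
```

In a full 3DCNN, many such filters are stacked in layers with nonlinearities and pooling, and the kernels are learned from labelled gesture videos rather than fixed by hand.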