Authors: Avgi Kollakidou 1; Frederik Haarslev 1; Cagatay Odabasi 2; Leon Bodenhagen 1 and Norbert Krüger 1
Affiliations: 1 SDU Robotics, University of Southern Denmark, Campusvej 55, Odense C, Denmark; 2 Fraunhofer IPA, Nobelstraße 12, Stuttgart, Germany
Keyword(s):
Action Recognition, Gesture Recognition, Human-Robot Interaction.
Abstract:
Much of human communication happens through body language and gestures. Gesture recognition in human-robot interaction remains an unsolved problem, which limits the communication possible between humans and robots in today’s applications. Gesture recognition can be treated as the same problem as action recognition, which deep learning has largely solved; however, current publicly available datasets contain few classes relevant to human-robot interaction. Addressing this problem therefore requires a dedicated human-robot interaction gesture dataset. In this paper, we introduce HRI-Gestures, which includes 13600 instances of RGB and depth image sequences together with joint position files. A state-of-the-art action recognition network trained on relevant subsets of the dataset achieves upwards of 96.9% accuracy. However, as the network is designed for the large-scale NTU RGB+D dataset, it achieves subpar performance on the full HRI-Gestures dataset. Further gains in gesture recognition are possible through tailored algorithms or an extension of the dataset.
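For readers who want to experiment with a dataset of this form, the sketch below shows a minimal PyTorch loader for gesture instances stored as RGB frames, depth frames, and a per-sequence joint file. The directory layout, file names, and the HRIGesturesDataset class are assumptions made purely for illustration; they are not taken from the paper or the released dataset.

```python
# Minimal sketch of a PyTorch loader, ASSUMING a hypothetical layout of
# <root>/<class>/<instance>/ with rgb/*.png, depth/*.png and one
# joints.npy per instance. This is NOT the published structure of
# HRI-Gestures, only an illustration of loading such sequence data.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class HRIGesturesDataset(Dataset):  # hypothetical helper class
    def __init__(self, root: str):
        # One sample per <class>/<instance> directory.
        self.samples = sorted(p for p in Path(root).glob("*/*") if p.is_dir())
        self.classes = sorted({p.parent.name for p in self.samples})

    def __len__(self):
        return len(self.samples)

    @staticmethod
    def _load_frames(folder: Path) -> torch.Tensor:
        # Stack per-frame images along a leading time axis; assumes all
        # frames in a sequence share one resolution.
        frames = [np.asarray(Image.open(f)) for f in sorted(folder.glob("*.png"))]
        return torch.from_numpy(np.stack(frames))

    def __getitem__(self, idx):
        sample = self.samples[idx]
        rgb = self._load_frames(sample / "rgb")
        depth = self._load_frames(sample / "depth")
        # Joint positions assumed stored as one NumPy array per sequence.
        joints = torch.from_numpy(np.load(sample / "joints.npy"))
        label = self.classes.index(sample.parent.name)
        return rgb, depth, joints, label
```

Wrapped in a torch.utils.data.DataLoader (with padding or sub-sampling to equalise sequence lengths across a batch), such a loader could feed an action recognition network of the kind evaluated in the paper.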