the number of samples and the visual and temporal complexity of the generated novel dataset, the performance of the presented system is close to that of Molchanov et al. (2015), who achieved 77.5% accuracy for hand gesture recognition using a visually segmented dataset.
Taking into account the present results, the proposed method could serve as a starting point for future work in the field of human-robot interaction with gestures. In a real-world scenario, an adaptation phase in which the user becomes accustomed to the constraints of our system would be required. In addition, a high-level configuration of the neural network could be carried out to increase the confidence of the outputs. As a benefit, during online execution the system would have several opportunities to identify the same gesture as the temporal window moves.
In the future, we plan to test the possibility of reusing a network already trained on a similar visual task and transferring its features to our problem. In addition, evaluating the performance of a mixed architecture combining ConvLSTM and 3DCNN would be an interesting experiment. Finally, in order to generate more samples, we would like to experiment with methods that produce synthetic data realistic enough for learning.
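The transfer-learning idea mentioned above can be illustrated with a minimal sketch: keep the feature layers of a network trained on a related task frozen, and train only a new classification head on the target gesture data. All dimensions, the toy data, and the plain-NumPy training loop below are illustrative assumptions, not the architecture or data used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for weights pretrained on a similar visual task (assumption:
# in practice these would be loaded from the pretrained network).
W_feat = rng.standard_normal((32, 16)) * 0.1  # frozen feature extractor

def features(x):
    """Frozen feature extractor: one ReLU layer with 'pretrained' weights."""
    return np.maximum(x @ W_feat, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy target-task data: 64 samples, 32-dim inputs, 9 gesture classes.
n_classes = 9
X = rng.standard_normal((64, 32))
y = rng.integers(0, n_classes, size=64)
F = features(X)  # computed once, since the feature layers are frozen

# Train only the new head with gradient descent on the cross-entropy loss.
W_head = np.zeros((16, n_classes))
for _ in range(200):
    p = softmax(F @ W_head)
    p[np.arange(len(y)), y] -= 1.0        # gradient of cross-entropy w.r.t. logits
    W_head -= 0.1 * (F.T @ p) / len(y)    # update head; W_feat stays untouched

acc = (softmax(F @ W_head).argmax(axis=1) == y).mean()
print(f"training accuracy of the new head: {acc:.2f}")
```

Because only the small head is optimized, far fewer target-task samples are needed than for training the full network from scratch, which is the motivation for this line of future work.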
ACKNOWLEDGEMENTS
This work was funded by the Ministry of Economy, Industry and Competitiveness of the Spanish Government through project DPI2015-68087-R and predoctoral grant BES-2016-078290, and by the European Commission and FEDER funds through the project COMMANDIA (SOE2/P1/F0638), an action supported by Interreg-V Sudoe.
REFERENCES
Abid, M. R., Meszaros, P. E., Silva, R. F., and Petriu,
E. M. (2014). Dynamic hand gesture recognition for
human-robot and inter-robot communication. In IEEE
Conference on Computational Intelligence and Vir-
tual Environments for Measurement Systems and Ap-
plications, pages 12–17.
Barros, P., Parisi, G. I., Jirak, D., and Wermter, S. (2014). Real-time gesture recognition using a humanoid robot with a deep neural architecture.
Chen, K.-Y., Chien, C.-C., Chang, W.-L., and Teng, J.-T. (2010). An integrated color and hand gesture recognition approach for an autonomous mobile robot. In 3rd International Congress on Image and Signal Processing (CISP), volume 5, pages 2496–2500. IEEE.
Giusti, A., Cireşan, D. C., Masci, J., Gambardella, L. M., and Schmidhuber, J. (2013). Fast image scanning with deep max-pooling convolutional neural networks. In International Conference on Image Processing (ICIP), pages 4034–4038. IEEE.
Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep
Learning. MIT Press.
Ionescu, B., Coquin, D., Lambert, P., and Buzuloiu, V.
(2005). Dynamic hand gesture recognition using the
skeleton of the hand. EURASIP Journal on Applied Signal Processing, 2005(13):2101–2109.
Kohavi, R. (1995). A study of cross-validation and boot-
strap for accuracy estimation and model selection. In
14th International Joint Conference on Artificial In-
telligence, volume 2, pages 1137–1143.
Luo, R. C. and Wu, Y. C. (2012). Hand gesture recogni-
tion for Human-Robot Interaction for service robot.
In IEEE International Conference on Multisensor Fu-
sion and Integration for Intelligent Systems, pages
318–323. IEEE.
Malima, A., Özgür, E., and Çetin, M. (2006). A fast algorithm for vision-based hand gesture recognition for robot control. In IEEE 14th Signal Processing and Communications Applications Conference, pages 6–9. IEEE.
Molchanov, P., Gupta, S., Kim, K., and Kautz, J. (2015). Hand gesture recognition with 3D convolutional neural networks. In Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1–7.
Powers, D. M. (2011). Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. Journal of Machine Learning Technologies, 2(1):37–63.
Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61:85–117.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958.
Strobel, M., Illmann, J., Kluge, B., and Marrone, F. (2002).
Using spatial context knowledge in gesture recogni-
tion for commanding a domestic service robot. In
IEEE International Workshop on Robot and Human
Interactive Communication, pages 468–473.
Tsironi, E., Barros, P., and Wermter, S. (2016). Gesture recognition with a convolutional long short-term memory recurrent neural network. In European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Bruges, Belgium.
ICPRAM 2019 - 8th International Conference on Pattern Recognition Applications and Methods