However, although 3D sensing greatly simplifies hand detection and tracking,
these devices present occlusion problems when fingertips are perpendicular to
the sensor; careful gesture design may avoid or reduce this problem. Range
limits also become an issue: when hands are too far from the sensor, depth
information is lost.
Future development and testing are described next.
First, support for two-hand interaction: the planned approach is to divide the
camera feed into two areas, one per hand, and then perform recognition on each
area independently. Second, the current implementation uses the GPU only for
hand detection and segmentation; performance may improve if all stages are
executed on the GPU. Third, a basic scripting tool for defining new gestures
would avoid modifying the system internals. Finally, the gestures described in
Section 3.3 were defined for testing purposes only, without usability in mind;
more gestures must be specified, and a formal evaluation with a group of users
needs to be performed to determine which gestures are usable and suitable for
interaction in virtual environments.
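The planned two-hand approach, splitting the camera feed into one region per
hand before recognition, could be sketched as follows. This is a minimal,
hypothetical illustration (the function name, the fixed vertical split, and the
use of NumPy arrays as depth frames are assumptions, not part of the described
system):

```python
import numpy as np

def split_hands(depth_frame):
    """Divide a depth frame into left/right halves, one area per hand.

    Hypothetical sketch of the proposed two-hand scheme: recognition
    would then run on each half independently. A real system might
    instead track the split line from the detected hand positions.
    """
    h, w = depth_frame.shape
    left = depth_frame[:, : w // 2]   # area assigned to the left hand
    right = depth_frame[:, w // 2 :]  # area assigned to the right hand
    return left, right

# Usage with a synthetic 480x640 depth frame:
frame = np.zeros((480, 640), dtype=np.uint16)
left, right = split_hands(frame)
assert left.shape == (480, 320) and right.shape == (480, 320)
```

A fixed mid-frame split is the simplest choice, but it fails when hands cross;
an adaptive split line would be a natural refinement.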
A Bare-Hand Gesture Interaction System for Virtual Environments