Freitas, F., Peres, S., Lima, C., and Barbosa, F. (2014).
Grammatical facial expressions recognition with ma-
chine learning. In Proc. of 27th Florida Artificial In-
tell. Research Society Conf., pages 180–185. AAAI.
Gebre, B. G., Wittenburg, P., and Lenkiewicz, P. (2012).
Towards automatic gesture stroke detection. In 8th Int.
Conf. on Language Resources and Evaluation, pages
231–235. European Language Resources Association.
Hachaj, T. and Ogiela, M. R. (2014). Rule-based approach
to recognizing human body poses and gestures in real
time. Multimedia Systems, 20(1):81–99.
Haykin, S. S. (2009). Neural networks and learning machines, volume 3. Pearson, Upper Saddle River, NJ, USA.
Jacob, M. G. and Wachs, J. P. (2014). Context-based hand gesture recognition for the operating room. Pattern Recognition Letters, 36:196–203.
Kantz, H. and Schreiber, T. (2004). Nonlinear time series analysis, volume 7. Cambridge University Press.
Kendon, A. (1980). Gesticulation and speech: Two aspects of the process of utterance. The Relationship of Verbal and Nonverbal Communication, pages 207–227.
Khan, S., Bailey, D., and Gupta, G. S. (2012). Detecting
pauses in continuous sign language. In Proc. of Int.
Conf. on Mechatronics and Mach. Vision in Practice,
pages 11–15. IEEE.
Kim, D., Song, J., and Kim, D. (2007). Simultaneous gesture segmentation and recognition based on forward spotting accumulative HMMs. Pattern Recognit., 40(11):3012–3026.
Kita, S., van Gijn, I., and van der Hulst, H. (1998). Movement phases in
signs and co-speech gestures, and their transcription
by human coders. In Proc. of Int. Gesture Workshop
Bielefeld, pages 23–35. Springer.
Kyan, M., Sun, G., Li, H., Zhong, L., Muneesawang, P., Dong, N., Elder, B., and Guan, L. (2015). An approach to ballet dance training through MS Kinect and visualization in a CAVE virtual reality environment. ACM Trans. on Intell. Syst. Technol., 6(2):23:1–23:37.
Lee, G. C., Yeh, F.-H., and Hsiao, Y.-H. (2016). Kinect-based Taiwanese sign-language recognition system. Multimedia Tools and Applications, 75(1):261–279.
Liang, H., Yuan, J., and Thalmann, D. (2014). Parsing
the hand in depth images. IEEE Trans. Multimedia,
16(5):1241–1253.
Lichman, M. (2013). UCI machine learning repository. University of California, Irvine, School of Information and Computer Sciences. http://archive.ics.uci.edu/ml.
Liu, S., Feng, J., Domokos, C., Xu, H., Huang, J., Hu, Z.,
and Yan, S. (2014). Fashion parsing with weak color-
category labels. IEEE Trans. Multimedia, 16(1):253–
265.
Liu, S., Liang, X., Liu, L., Lu, K., Lin, L., Cao, X., and Yan,
S. (2015). Fashion parsing with video context. IEEE
Trans. Multimedia, 17(8):1347–1358.
Lücking, A., Bergmann, K., Hahn, F., Kopp, S., and Rieser, H. (2013). Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications. J. on Multimodal User Interfaces, 7(1-2).
Madeo, R. C. B., Peres, S. M., Bíscaro, H. H., Dias, D. B.,
and Boscarioli, C. (2010). A committee machine im-
plementing the pattern recognition module for finger-
spelling applications. In Proc. of the ACM Symposium
on Applied Computing, pages 954–958.
Madeo, R. C. B., Peres, S. M., and Lima, C. A. (2016). Gesture phase segmentation using support vector machines. Expert Syst. Appl., 56:100–115.
Madeo, R. C. B., Wagner, P. K., and Peres, S. M. (2013).
A review of temporal aspects of hand gesture analysis
applied to discourse analysis and natural conversation.
Int. J. of Comput. Sci. & Inf. Tech., 5(4).
Martell, C. and Kroll, J. (2007). Corpus-based gesture anal-
ysis: an extension of the form dataset for the auto-
matic detection of phases in a gesture. Int. J. of Se-
mantic Computing, 1(4):521–536.
McNeill, D. (1992). Hand and mind: What the hands reveal about thought. University of Chicago Press.
McNeill, D. (2015). Why We Gesture: The Surprising Role
of Hand Movements in Communication. Cambridge
University Press.
Ong, S. C. and Ranganath, S. (2005). Automatic sign lan-
guage analysis: A survey and the future beyond lexi-
cal meaning. IEEE Trans. Pattern Anal. Mach. Intell.,
27(6):873–891.
Popa, D., Simion, G., Gui, V., and Otesteanu, M. (2008).
Real time trajectory based hand gesture recognition.
WSEAS Trans. on Inf. Sci. and Appl., 5(4):532–546.
Ramakrishnan, A. S. and Neff, M. (2013). Segmentation of
hand gestures using motion capture data. In Proc. of
the Int. Conf. on Autonomous Agents and Multi-agent
Systems, pages 1249–1250.
Rosani, A., Conci, N., and De Natale, F. G. B. (2014). Human behavior recognition using a context-free grammar. J. of Electronic Imaging, 23(3):033016.
Salem, M., Kopp, S., Wachsmuth, I., Rohlfing, K., and Jou-
blin, F. (2012). Generation and evaluation of com-
municative robot gesture. Int. J. of Social Robotics,
4(2):201–217.
Semmlow, J. and Griffel, B. (2014). Biosignal and Medical
Image Processing, Third Edition. Taylor & Francis.
Smith, N. A. (2011). Linguistic Structure Prediction. Mor-
gan & Claypool.
Spano, L. D., Cisternino, A., and Paternò, F. (2012). A
compositional model for gesture definition. In Int.
Conf. on Human-Centred Softw. Eng., pages 34–52.
Springer.
Xu, W. and Lee, E.-J. (2011). Hand gesture recognition using improved hidden Markov models. J. of Korea Multimedia Soc., 14(7):866–871.
Yin, Y. and Davis, R. (2014). Real-time continuous gesture recognition for natural human-computer interaction. In IEEE Symp. on Visual Languages and Human-Centric Computing, pages 113–120.
Zhu, L., Chen, Y., Lin, C., and Yuille, A. L. (2011). Max margin learning of hierarchical configural deformable templates (HCDTs) for efficient object parsing and pose estimation. Int. J. of Computer Vision, 93(1):1–21.