Erwig, M., Güting, R. H., Schneider, M., and Vazirgiannis, M. (1998). Abstract and discrete modeling of spatio-temporal data types. In Proceedings of the 6th ACM International Symposium on Advances in Geographic Information Systems, GIS ’98, pages 131–136, New York, NY, USA. ACM.
Feng, R. and Prabhakaran, B. (2016). On the “face of things”. In Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval, ICMR ’16, pages 3–4, New York, NY, USA. ACM.
Gloderer, M. and Hertle, A. (2010). Spline-based trajectory optimization for autonomous vehicles with Ackerman drive.
Grif, M. and Manueva, Y. (2016). Semantic analyses of text to translate to Russian sign language. In 2016 11th International Forum on Strategic Technology (IFOST), pages 286–289.
Happy, S. and Routray, A. (2015). Automatic facial expression recognition using features of salient facial patches. Affective Computing, IEEE Transactions on, 6(1):1–12.
Huenerfauth, M., Lu, P., and Rosenberg, A. (2011). Evaluating importance of facial expression in American Sign Language and Pidgin Signed English animations. In The Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS ’11, pages 99–106, New York, NY, USA. ACM.
Hyde, J., Carter, E. J., Kiesler, S., and Hodgins, J. K. (2016). Evaluating animated characters: Facial motion magnitude influences personality perceptions. ACM Trans. Appl. Percept., 13(2):8:1–8:17.
Iatskiu, C. E. A., García, L. S., and Antunes, D. R. (2017). Automatic SignWriting generation of Libras signs from CORE-SL. In Proceedings of the XVI Brazilian Symposium on Human Factors in Computing Systems, IHC 2017, pages 55:1–55:4, New York, NY, USA. ACM.
Kacorri, H. (2015). TR-2015001: A Survey and Critique of Facial Expression Synthesis in Sign Language Animation. CUNY Academic Works.
Kacorri, H. and Huenerfauth, M. (2014). Implementation and evaluation of animation controls sufficient for conveying ASL facial expressions. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility, ASSETS ’14, pages 261–262, New York, NY, USA. ACM.
Kacorri, H., Huenerfauth, M., Ebling, S., Patel, K., and Willard, M. (2015). Demographic and experiential factors influencing acceptance of sign language animation by deaf users. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, ASSETS ’15, pages 147–154, New York, NY, USA. ACM.
Kaur, S. and Singh, M. (2015). Indian sign language animation generation system. In Next Generation Computing Technologies (NGCT), 2015 1st International Conference on, pages 909–914.
Le, V., Tang, H., and Huang, T. (2011). Expression recognition from 3D dynamic faces using robust spatio-temporal shape features. In Automatic Face Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on, pages 414–421.
Lee, J., Han, B., and Choi, S. (2016). Interactive motion effects design for a moving object in 4D films. In Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, VRST ’16, pages 219–228, New York, NY, USA. ACM.
Lemaire, P., Ben Amor, B., Ardabilian, M., Chen, L., and Daoudi, M. (2011). Fully automatic 3D facial expression recognition using a region-based approach. In Proceedings of the 2011 Joint ACM Workshop on Human Gesture and Behavior Understanding, J-HGBU ’11, pages 53–58, New York, NY, USA. ACM.
Li, H., Kulik, L., and Ramamohanarao, K. (2014). Spatio-temporal trajectory simplification for inferring travel paths. In Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, SIGSPATIAL ’14, pages 63–72, New York, NY, USA. ACM.
Lombardo, V., Battaglino, C., Damiano, R., and Nunnari, F. (2011). An avatar-based interface for the Italian sign language. In Complex, Intelligent and Software Intensive Systems (CISIS), 2011 International Conference on, pages 589–594.
Lundqvist, D. and Litton, J. E. (1998). The averaged Karolinska directed emotional faces - AKDEF. CD ROM from Department of Clinical Neuroscience, Psychology Section.
Lv, S., Da, F., and Deng, X. (2015). A 3D face recognition method using region-based extended local binary pattern. In Image Processing (ICIP), 2015 IEEE International Conference on, pages 3635–3639.
Lyons, M. J., Akamatsu, S., Kamachi, M., and Gyoba, J. (1998). Coding facial expressions with Gabor wavelets. In 3rd IEEE International Conference on Automatic Face and Gesture Recognition.
Mahmoud, M. M., Baltrušaitis, T., and Robinson, P. (2014). Automatic detection of naturalistic hand-over-face gesture descriptors. In Proceedings of the 16th International Conference on Multimodal Interaction, ICMI ’14, pages 319–326, New York, NY, USA. ACM.
Erwig, M., Schneider, M., and Güting, R. H. (1998). Temporal objects for spatio-temporal data models and a comparison of their representations. In Int. Workshop on Advances in Database Technologies, LNCS 1552, pages 454–465.
Neidle, C., Bahan, B., MacLaughlin, D., Lee, R. G., and Kegl, J. (1998). Realizations of syntactic agreement in American Sign Language: Similarities between the clause and the noun phrase. Studia Linguistica, 52(3):191–226.
Obaid, M., Mukundan, R., Billinghurst, M., and Pelachaud, C. (2010). Expressive MPEG-4 facial animation using quadratic deformation models. In Computer Graphics, Imaging and Visualization (CGIV), 2010 Seventh International Conference on, pages 9–14.
Oliveira, M., Chatbri, H., Little, S., O’Connor, N. E., and Sutherland, A. (2017). A comparison between end-to-end approaches and feature extraction based approaches for sign language recognition. In 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), pages 1–6.
Punchimudiyanse, M. and Meegama, R. (2015). 3D signing avatar for Sinhala sign language. In Industrial and
ICEIS 2020 - 22nd International Conference on Enterprise Information Systems