Generating Co-occurring Facial Nonmanual Signals in Synthesized American Sign Language

Jerry Schnepp, Rosalee Wolfe, John McDonald, Jorge Toro

2013

Abstract

Translating between English and American Sign Language (ASL) requires an avatar to display synthesized ASL. Essential to the language are nonmanual signals that appear on the face. These have posed a difficult challenge for signing avatars: previous systems were hampered by an inability to portray simultaneously occurring nonmanual signals on the face. This paper presents a method for supporting co-occurring nonmanual signals in ASL. Animations produced by the new system were tested with 40 members of the Deaf community in the United States. Participants identified all of the nonmanual signals even when they co-occurred. Co-occurring question nonmanuals and affect information were distinguishable, which is particularly promising because the two processes move an avatar's brows in competing directions. This brings the state of the art one step closer to the goal of an automatic English-to-ASL translator.
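
The central difficulty described above is that two co-occurring signals can drive the same facial feature in opposite directions, as when a yes/no question raises the brows while negative affect lowers them. The Python sketch below is purely illustrative and is not the authors' method: it models each nonmanual signal as a set of time-varying offsets on named facial channels and resolves competing demands on a shared channel by weighted blending. All channel names, curves, and values are hypothetical.

# Illustrative sketch only: not the authors' implementation.
# Assumption: each nonmanual signal is modeled as time-varying offsets
# on named facial channels (e.g., "brow_height"), and signals that
# co-occur on the same channel are combined by weighted summation
# rather than one signal overwriting the other.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class NonmanualSignal:
    name: str
    # Maps a facial channel to an offset curve over normalized time [0, 1].
    channels: Dict[str, Callable[[float], float]]
    weight: float = 1.0

def blend(signals: List[NonmanualSignal], t: float) -> Dict[str, float]:
    """Compute a combined facial pose at normalized time t."""
    pose: Dict[str, float] = {}
    for sig in signals:
        for channel, curve in sig.channels.items():
            # Sum competing contributions instead of overwriting them.
            pose[channel] = pose.get(channel, 0.0) + sig.weight * curve(t)
    return pose

# A yes/no question raises the brows while anger lowers them; blending
# yields an intermediate brow pose in which both signals can remain
# legible. Channel names and values are hypothetical.
yn_question = NonmanualSignal("yn-question", {"brow_height": lambda t: 0.75})
anger = NonmanualSignal("anger", {"brow_height": lambda t: -0.5,
                                  "brow_furrow": lambda t: 0.7})

print(blend([yn_question, anger], t=0.5))
# {'brow_height': 0.25, 'brow_furrow': 0.7}

Under this toy model, scaling each signal's weight trades off how strongly the question marking or the affect reads on the shared brow channel.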



Paper Citation


in Harvard Style

Schnepp J., Wolfe R., McDonald J. and Toro J. (2013). Generating Co-occurring Facial Nonmanual Signals in Synthesized American Sign Language. In Proceedings of the International Conference on Computer Graphics Theory and Applications and International Conference on Information Visualization Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2013), ISBN 978-989-8565-46-4, pages 407-416. DOI: 10.5220/0004217004070416


in Bibtex Style

@conference{grapp13,
author={Jerry Schnepp and Rosalee Wolfe and John McDonald and Jorge Toro},
title={Generating Co-occurring Facial Nonmanual Signals in Synthesized American Sign Language},
booktitle={Proceedings of the International Conference on Computer Graphics Theory and Applications and International Conference on Information Visualization Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2013)},
year={2013},
pages={407-416},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004217004070416},
isbn={978-989-8565-46-4},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Computer Graphics Theory and Applications and International Conference on Information Visualization Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2013)
TI - Generating Co-occurring Facial Nonmanual Signals in Synthesized American Sign Language
SN - 978-989-8565-46-4
AU - Schnepp J.
AU - Wolfe R.
AU - McDonald J.
AU - Toro J.
PY - 2013
SP - 407
EP - 416
DO - 10.5220/0004217004070416
ER -