Authors: Jerry Schnepp 1; Rosalee Wolfe 2; John McDonald 2 and Jorge Toro 3
Affiliations: 1 Bowling Green State University, United States; 2 DePaul University, United States; 3 Worcester Polytechnic Institute, United States
Keyword(s):
Avatar Technology, Virtual Agents, Facial Animation, Accessibility Technology for People who are Deaf, American Sign Language.
Related Ontology Subjects/Areas/Topics: Animation and Simulation; Animation Systems; Computer Vision, Visualization and Computer Graphics; Gesture Generation; Social Agents and Avatars; Social Agents in Computer Graphics; Social and Conversational Agents
Abstract:
Translating between English and American Sign Language (ASL) requires an avatar to display synthesized ASL. Essential to the language are nonmanual signals that appear on the face. These have historically posed a difficult challenge for signing avatars: previous systems were hampered by an inability to portray simultaneously occurring nonmanual signals on the face. This paper presents a method for supporting co-occurring nonmanual signals in ASL. Animations produced by the new system were tested with 40 members of the Deaf community in the United States. Participants identified all of the nonmanual signals, even when they co-occurred. Co-occurring question nonmanuals and affect information were distinguishable, which is particularly promising because the two processes move an avatar's brows in a competing manner. This brings the state of the art one step closer to the goal of an automatic English-to-ASL translator.
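The abstract does not detail how the competing brow movements are combined; as a purely illustrative sketch under assumed conventions (not the authors' method), one way to keep a question brow raise and an affective brow furrow both visible when they co-occur is to blend the competing brow channels by weight rather than letting one overwrite the other. All names and values below are hypothetical.

# Illustrative sketch only: blends two competing brow "channels"
# (e.g., a yes/no-question brow raise and an angry brow furrow)
# into a single brow offset for the current frame. All identifiers
# are hypothetical and do not come from the paper.

from dataclasses import dataclass

@dataclass
class BrowChannel:
    name: str
    target: float   # desired brow displacement: +1.0 fully raised, -1.0 fully furrowed
    weight: float   # prominence of this channel at the current frame, in [0, 1]

def blend_brows(channels: list[BrowChannel]) -> float:
    """Weighted average of competing brow targets, so co-occurring
    nonmanual signals both remain visible instead of one canceling the other."""
    total_weight = sum(c.weight for c in channels)
    if total_weight == 0.0:
        return 0.0  # no active channels: neutral brows
    return sum(c.target * c.weight for c in channels) / total_weight

# Example: a yes/no-question brow raise co-occurring with mild negative affect.
frame_channels = [
    BrowChannel("yn_question_raise", target=+1.0, weight=0.8),
    BrowChannel("affect_furrow", target=-1.0, weight=0.4),
]
print(blend_brows(frame_channels))  # net brow pose leaning toward the raise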