Authors:
Diego Addan Gonçalves¹; Maria Cecília Calani Baranauskas¹; Julio César dos Reis¹ and Eduardo Todt²
Affiliations:
¹Institute of Computing, University of Campinas, São Paulo, Brazil; ²Department of Informatics, Universidade Federal do Paraná, Curitiba, Brazil
Keyword(s):
3D Avatar, Sign Language, Facial Expression.
Abstract:
Systems that use virtual environments with avatars for information communication are of fundamental importance in contemporary life. They are even more relevant in the context of supporting sign language communication for accessibility purposes. Although facial expressions provide message context and define part of the information transmitted, e.g., irony or sarcasm, computational systems usually treat facial expressions as a static background feature of a primarily gestural language. This article proposes a novel parametric model for synthesizing complex facial expressions on a 3D avatar, leveraging emotion context. Our technique interpolates the base expressions of the geometric animation through centroid control and spatio-temporal data. The proposed method automatically generates complex facial expressions with controllers that use region parameterization, as in the manual models used for sign language representation. Our approach to generating facial expressions adds emotion to the representation, a determining factor in defining the tone of a message. This work contributes a definition of non-manual markers for sign language 3D avatars and a refinement of the message synthesized in sign languages, proposing a complete model for facial parameterization and synthesis based on interpolation of geometric centroid regions. A dataset of facial expressions was generated with the proposed model and validated using machine learning algorithms. In addition, evaluations conducted with the deaf community showed positive acceptance of the facial expressions and synthesized emotions.
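To illustrate the kind of centroid-based interpolation the abstract describes, the following is a minimal sketch, not the paper's actual model: it assumes base expressions are stored as per-region vertex arrays and blends them through each region's geometric centroid with a blend weight that could vary over time for spatio-temporal control. All names here (region_centroid, interpolate_expression, the region keys) are hypothetical.

```python
import numpy as np

def region_centroid(vertices: np.ndarray) -> np.ndarray:
    """Geometric centroid of one facial region (N x 3 vertex array)."""
    return vertices.mean(axis=0)

def interpolate_expression(base_a: dict, base_b: dict, t: float) -> dict:
    """Blend two base expressions region by region.

    base_a, base_b: dicts mapping region name -> (N x 3) vertex array,
    with matching regions and vertex counts.
    t: blend weight in [0, 1]; sampling t per frame gives a temporal ramp
    from expression A to expression B.
    """
    blended = {}
    for region, verts_a in base_a.items():
        verts_b = base_b[region]
        # Use each region's centroid as the control point: interpolate the
        # centroids, blend the per-vertex offsets around them, then rebuild
        # the region at the interpolated centroid.
        c_a, c_b = region_centroid(verts_a), region_centroid(verts_b)
        c_t = (1.0 - t) * c_a + t * c_b
        offsets = (1.0 - t) * (verts_a - c_a) + t * (verts_b - c_b)
        blended[region] = c_t + offsets
    return blended

# Example: halfway blend between a neutral face and a raised-brow expression.
neutral = {"brow": np.zeros((4, 3)), "mouth": np.zeros((6, 3))}
raised = {"brow": np.full((4, 3), 0.2), "mouth": np.zeros((6, 3))}
halfway = interpolate_expression(neutral, raised, t=0.5)
```

Separating centroid motion from per-vertex offsets lets a controller drive each facial region independently, which mirrors the region parameterization described above; the actual model may use a richer parameterization than this linear blend.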