
Paper: Facial Expressions Animation in Sign Language based on Spatio-temporal Centroid

Authors: Diego Gonçalves¹; Maria Baranauskas¹; Julio Reis¹ and Eduardo Todt²

Affiliations: ¹ Institute of Computing, University of Campinas, São Paulo, Brazil; ² Department of Informatics, Universidade Federal do Paraná, Curitiba, Brazil

ISBN: 978-989-758-423-7

Keyword(s): 3D Avatar, Sign Language, Facial Expression.

Abstract: Systems that use virtual environments with avatars for information communication are of fundamental importance in contemporary life. They are even more relevant in the context of supporting sign language communication for accessibility purposes. Although facial expressions provide message context and define part of the information transmitted, e.g., irony or sarcasm, facial expressions are usually treated as a static background feature in a primarily gestural language in computational systems. This article proposes a novel parametric model of facial expression synthesis through a 3D avatar representing complex facial expressions leveraging emotion context. Our technique explores interpolation of the base expressions in the geometric animation through centroid control and spatio-temporal data. The proposed method automatically generates complex facial expressions with controllers that use region parameterization, as in manual models used for sign language representation. Our approach to the generation of facial expressions adds emotion to the representation, which is a determining factor in defining the tone of a message. This work contributes the definition of non-manual markers for sign language 3D avatars and the refinement of the synthesized message in sign languages, proposing a complete model for facial parameters and synthesis using geometric centroid region interpolation. A dataset with facial expressions was generated using the proposed model and validated using machine learning algorithms. In addition, evaluations conducted with the deaf community showed a positive acceptance of the facial expressions and synthesized emotions.
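The abstract describes interpolating base expressions over facial regions via their centroids along an animation timeline. A minimal sketch of that idea follows, assuming a mesh stored as NumPy vertex arrays; the expression names, vertex counts, and region indices are illustrative placeholders, not the paper's actual parameterization.

```python
import numpy as np

# Hypothetical base expressions: each is an (n_vertices, 3) array of
# 3D mesh positions. Real avatar meshes have far more vertices.
neutral = np.zeros((4, 3))
smile = np.array([[ 0.0, 0.2, 0.0],
                  [ 0.1, 0.3, 0.0],
                  [-0.1, 0.3, 0.0],
                  [ 0.0, 0.1, 0.1]])

def region_centroid(expr, region):
    """Centroid of the vertices belonging to one facial region."""
    return expr[region].mean(axis=0)

def blend(expr_a, expr_b, t):
    """Linear interpolation between two base expressions, with
    t in [0, 1] advancing over the animation timeline."""
    return (1.0 - t) * expr_a + t * expr_b

# Halfway through the transition the mesh is the average of both poses,
# and each region's centroid moves accordingly.
halfway = blend(neutral, smile, 0.5)
mouth_region = [0, 1, 2]  # hypothetical vertex indices for one region
print(region_centroid(halfway, mouth_region))
```

The sketch uses plain linear blending for clarity; the paper's model drives region parameters through centroid controllers rather than blending whole meshes directly.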

License: CC BY-NC-ND 4.0


Paper citation in several formats:
Gonçalves, D.; Baranauskas, M.; Reis, J. and Todt, E. (2020). Facial Expressions Animation in Sign Language based on Spatio-temporal Centroid. In Proceedings of the 22nd International Conference on Enterprise Information Systems - Volume 2: ICEIS, ISBN 978-989-758-423-7, pages 463-475. DOI: 10.5220/0009344404630475

@conference{iceis20,
author={Diego Addan Gonçalves and Maria Cecília Calani Baranauskas and Julio César dos Reis and Eduardo Todt},
title={Facial Expressions Animation in Sign Language based on Spatio-temporal Centroid},
booktitle={Proceedings of the 22nd International Conference on Enterprise Information Systems - Volume 2: ICEIS},
year={2020},
pages={463-475},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0009344404630475},
isbn={978-989-758-423-7},
}

TY - CONF

JO - Proceedings of the 22nd International Conference on Enterprise Information Systems - Volume 2: ICEIS
TI - Facial Expressions Animation in Sign Language based on Spatio-temporal Centroid
SN - 978-989-758-423-7
AU - Gonçalves, D.
AU - Baranauskas, M.
AU - Reis, J.
AU - Todt, E.
PY - 2020
SP - 463
EP - 475
DO - 10.5220/0009344404630475
