Language Agnostic Gesture Generation Model: A Case Study of Japanese Speakers' Gesture Generation Using English Text-to-Gesture Model
Genki Sakata, Naoshi Kaneko, Dai Hasegawa, Shinichi Shirakawa
2023
Abstract
Automatic gesture generation from speech audio or text can reduce the human effort required to manually create gestures for embodied conversational agents. Deep learning-based gesture generation models trained on large-scale speech–gesture datasets are actively being investigated. However, such large-scale gesture datasets exist only for English speakers, and creating comparable datasets for other languages is difficult. We aim to realize a language-agnostic gesture generation model that produces gestures for a target language while using a gesture dataset from a different language for model training. This study presents two simple methods that generate gestures for Japanese speech using only a text-to-gesture model trained on an English dataset. The first method translates Japanese speech text into English and uses the translated word sequence as input to the text-to-gesture model. The second method leverages a multilingual embedding model that maps sentences into the same feature space regardless of language, enabling the English text-to-gesture model to generate gestures for Japanese speech directly. We evaluated the generated gestures for Japanese speech and showed that the gestures generated by our methods are comparable to the actual gestures in several cases, and that the second method is more promising than the first.
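The second method described in the abstract can be sketched as a pipeline: a language-agnostic sentence encoder maps a Japanese sentence into the same vector space the English-trained gesture model expects, so the Japanese embedding can be fed to that model unchanged. The sketch below is illustrative only, not the authors' code: `embed_multilingual` is a toy deterministic stand-in for a LASER/LaBSE-style shared-space encoder, and `generate_gestures` is a placeholder for the English-trained text-to-gesture network.

```python
import numpy as np

def embed_multilingual(sentence: str, dim: int = 8) -> np.ndarray:
    """Toy stand-in for a multilingual sentence encoder that maps
    sentences of any language into one shared feature space.
    Deterministic so the sketch is reproducible."""
    seed = sum(ord(c) for c in sentence) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)  # unit-normalized sentence vector

def generate_gestures(sentence_embedding: np.ndarray,
                      n_frames: int = 30,
                      n_joints: int = 10) -> np.ndarray:
    """Placeholder for the English-trained text-to-gesture model:
    projects a sentence embedding to a (frames x joint-angles)
    motion sequence. A real model would be a trained network."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((sentence_embedding.size, n_frames * n_joints))
    return np.tanh(sentence_embedding @ W).reshape(n_frames, n_joints)

# Method 2: embed the Japanese sentence into the shared space and pass
# the embedding straight to the model trained only on English data.
ja_embedding = embed_multilingual("konnichiwa, genki desu ka")
motion = generate_gestures(ja_embedding)
print(motion.shape)  # (30, 10)
```

Because the encoder places both languages in one space, no translation step is needed at inference time, which is what distinguishes this method from the translate-then-generate approach.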
Paper Citation
in Harvard Style
Sakata G., Kaneko N., Hasegawa D. and Shirakawa S. (2023). Language Agnostic Gesture Generation Model: A Case Study of Japanese Speakers' Gesture Generation Using English Text-to-Gesture Model. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 2: HUCAPP; ISBN 978-989-758-634-7, SciTePress, pages 47-54. DOI: 10.5220/0011643600003417
in Bibtex Style
@conference{hucapp23,
author={Genki Sakata and Naoshi Kaneko and Dai Hasegawa and Shinichi Shirakawa},
title={Language Agnostic Gesture Generation Model: A Case Study of Japanese Speakers' Gesture Generation Using English Text-to-Gesture Model},
booktitle={Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 2: HUCAPP},
year={2023},
pages={47--54},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011643600003417},
isbn={978-989-758-634-7},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 2: HUCAPP
TI - Language Agnostic Gesture Generation Model: A Case Study of Japanese Speakers' Gesture Generation Using English Text-to-Gesture Model
SN - 978-989-758-634-7
AU - Sakata G.
AU - Kaneko N.
AU - Hasegawa D.
AU - Shirakawa S.
PY - 2023
SP - 47
EP - 54
DO - 10.5220/0011643600003417
PB - SciTePress