Language Agnostic Gesture Generation Model: A Case Study of Japanese Speakers' Gesture Generation Using English Text-to-Gesture Model

Authors: Genki Sakata 1; Naoshi Kaneko 2; Dai Hasegawa 3 and Shinichi Shirakawa 1

Affiliations: 1 Yokohama National University, Yokohama, Kanagawa, Japan; 2 Aoyama Gakuin University, Sagamihara, Kanagawa, Japan; 3 Hokkai Gakuen University, Sapporo, Hokkaido, Japan

Keyword(s): Gesture Generation, Spoken Text, Multilingual Model, Neural Networks, Deep Learning, Human-Agent Interaction.

Abstract: Automatic gesture generation from speech audio or text can reduce the human effort required to manually create the gestures of embodied conversational agents. Deep learning-based gesture generation models trained on large-scale speech–gesture datasets are currently being investigated; however, such large-scale gesture datasets are limited to English speakers, and creating comparable datasets for other languages is difficult. We aim to realize a language-agnostic gesture generation model that produces gestures for a target language while using a gesture dataset in a different language for model training. The current study presents two simple methods that generate gestures for Japanese using only a text-to-gesture model trained on an English dataset. The first method translates Japanese speech text into English and uses the translated word sequence as input to the text-to-gesture model. The second method leverages a multilingual embedding model that embeds sentences in the same feature space regardless of language, enabling us to use the English text-to-gesture model to generate gestures for Japanese speech. We evaluated the generated gestures for Japanese speech and showed that the gestures generated by our methods are comparable to the actual gestures in several cases, and that the second method is promising compared to the first.
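
The two methods described in the abstract can be read as alternative input-preparation pipelines placed in front of the same English-trained text-to-gesture model. Below is a minimal, hypothetical Python sketch of that idea; the helper functions, feature dimensions, and example sentence are illustrative placeholders, not the authors' released implementation.

# Hypothetical sketch of the two input-preparation strategies from the abstract.
# All helpers below are stand-ins for the components the paper assumes:
# a Japanese-to-English translator, a multilingual sentence embedder, and the
# text-to-gesture model trained on an English dataset.

from typing import List, Sequence

def translate_ja_to_en(ja_text: str) -> str:
    """Placeholder for an off-the-shelf Japanese-to-English translation step."""
    return "hello everyone"  # stand-in translation output

def embed_multilingual(sentences: Sequence[str]) -> List[List[float]]:
    """Placeholder for a multilingual sentence embedder that maps Japanese and
    English sentences into the same feature space."""
    return [[0.0] * 512 for _ in sentences]  # stand-in 512-d embeddings

def english_text_to_gesture(inputs) -> List[List[float]]:
    """Placeholder for the English-trained text-to-gesture model; returns a
    sequence of pose vectors (one per output frame)."""
    return [[0.0] * 57]  # stand-in single pose frame

ja_sentence = "みなさん、こんにちは"

# Method 1: translate the Japanese speech text into English and feed the
# translated word sequence to the English text-to-gesture model.
en_words = translate_ja_to_en(ja_sentence).split()
gestures_method1 = english_text_to_gesture(en_words)

# Method 2: embed the Japanese sentence with a language-agnostic encoder and
# feed the shared-space features to the same model, skipping explicit translation.
features = embed_multilingual([ja_sentence])
gestures_method2 = english_text_to_gesture(features)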

CC BY-NC-ND 4.0

Paper citation in several formats:
Sakata, G.; Kaneko, N.; Hasegawa, D. and Shirakawa, S. (2023). Language Agnostic Gesture Generation Model: A Case Study of Japanese Speakers' Gesture Generation Using English Text-to-Gesture Model. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - HUCAPP; ISBN 978-989-758-634-7; ISSN 2184-4321, SciTePress, pages 47-54. DOI: 10.5220/0011643600003417

@conference{hucapp23,
author={Genki Sakata and Naoshi Kaneko and Dai Hasegawa and Shinichi Shirakawa},
title={Language Agnostic Gesture Generation Model: A Case Study of Japanese Speakers' Gesture Generation Using English Text-to-Gesture Model},
booktitle={Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - HUCAPP},
year={2023},
pages={47-54},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011643600003417},
isbn={978-989-758-634-7},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - HUCAPP
TI - Language Agnostic Gesture Generation Model: A Case Study of Japanese Speakers' Gesture Generation Using English Text-to-Gesture Model
SN - 978-989-758-634-7
IS - 2184-4321
AU - Sakata, G.
AU - Kaneko, N.
AU - Hasegawa, D.
AU - Shirakawa, S.
PY - 2023
SP - 47
EP - 54
DO - 10.5220/0011643600003417
PB - SciTePress