Authors:
Genki Sakata 1; Naoshi Kaneko 2; Dai Hasegawa 3 and Shinichi Shirakawa 1
Affiliations:
1 Yokohama National University, Yokohama, Kanagawa, Japan
2 Aoyama Gakuin University, Sagamihara, Kanagawa, Japan
3 Hokkai Gakuen University, Sapporo, Hokkaido, Japan
Keyword(s):
Gesture Generation, Spoken Text, Multilingual Model, Neural Networks, Deep Learning, Human-Agent Interaction.
Abstract:
Automatic gesture generation from speech audio or text can reduce the human effort required to manually author gestures for embodied conversational agents. Deep learning-based gesture generation models trained on large-scale speech–gesture datasets are actively being investigated; however, such datasets currently exist only for English speakers, and creating comparable datasets for other languages is difficult. We aim to realize a language-agnostic gesture generation model that produces gestures for a target language while being trained on a gesture dataset in a different language. This study presents two simple methods that generate gestures for Japanese speech using only a text-to-gesture model trained on an English dataset. The first method translates the Japanese speech text into English and feeds the translated word sequence to the text-to-gesture model. The second method leverages a multilingual embedding model that maps sentences into the same feature space regardless of language, enabling the English text-to-gesture model to generate gestures for Japanese speech without translation. We evaluated the generated gestures for Japanese speech and found that, in several cases, they are comparable to the actual gestures, and that the second method is more promising than the first.
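The two pipelines can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: the component names (translate_ja_to_en, encode_multilingual, text_to_gesture) and the feature/pose dimensions are placeholder assumptions standing in for the paper's actual translation system, multilingual sentence encoder, and English-trained gesture network.

```python
# Hypothetical sketch of the two methods described in the abstract.
# All components below are illustrative stand-ins, not the paper's code.
import numpy as np

FEATURE_DIM = 512   # assumed input feature size of the gesture model
POSE_DIM = 45       # assumed output pose dimensionality


def text_to_gesture(features: np.ndarray) -> np.ndarray:
    """Stand-in for the text-to-gesture network trained on an English
    speech-gesture dataset: maps a (T, FEATURE_DIM) feature sequence
    to a (T, POSE_DIM) pose sequence."""
    return np.zeros((features.shape[0], POSE_DIM))  # dummy output


def translate_ja_to_en(ja_text: str) -> str:
    """Stand-in for an off-the-shelf machine translation system."""
    return "placeholder english translation"


def encode_en_words(words: list[str]) -> np.ndarray:
    """Stand-in for per-word English embeddings, shape (T, FEATURE_DIM)."""
    return np.random.randn(len(words), FEATURE_DIM)


def encode_multilingual(sentence: str) -> np.ndarray:
    """Stand-in for a multilingual sentence encoder that embeds any
    language into the same shared feature space, shape (1, FEATURE_DIM)."""
    return np.random.randn(1, FEATURE_DIM)


def method1(ja_text: str) -> np.ndarray:
    # Method 1: translate the Japanese text into English, then reuse
    # the English-trained model on the translated word sequence.
    en_words = translate_ja_to_en(ja_text).split()
    return text_to_gesture(encode_en_words(en_words))


def method2(ja_text: str) -> np.ndarray:
    # Method 2: skip translation; the shared cross-lingual embedding
    # space lets the English-trained model consume Japanese input.
    return text_to_gesture(encode_multilingual(ja_text))
```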