Authors:
Ryota Inoue; Tsubasa Hirakawa; Takayoshi Yamashita and Hironobu Fujiyoshi
Affiliation:
Chubu University, 1200 Matsumoto-cho, Kasugai, Aichi, Japan
Keyword(s):
Deep Learning, 2D Motion Generation, Variational AutoEncoder, Language Model.
Abstract:
Various methods have been proposed for generating human motion from text, driven by advances in large language models and diffusion models. However, most research has focused on 3D motion generation. While 3D motion enables realistic representations, creating and collecting datasets with motion-capture technology is costly, and its application to downstream tasks, such as pose-guided human video generation, is limited. Therefore, we propose 2D Convolutional Motion Generative Pre-trained Transformer (2CM-GPT), a method for generating two-dimensional (2D) motion from text. 2CM-GPT is based on the framework of MotionGPT, a 3D motion generation method, and uses a motion tokenizer to convert 2D motion into motion tokens while learning the relationship between text and motion with a language model. Unlike MotionGPT, which uses 1D convolution to process 3D motion, 2CM-GPT uses 2D convolution to process 2D motion, enabling it to capture spatial relationships between joints more effectively. Evaluation experiments demonstrated that 2CM-GPT is effective in both motion reconstruction and text-guided 2D motion generation. The generated 2D motion is also shown to be effective for pose-guided human video generation.
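To illustrate the tokenizer idea described in the abstract, the following minimal sketch (PyTorch, not the authors' implementation; channel sizes, codebook size, joint count, and strides are illustrative assumptions) shows a VQ-VAE-style encoder that applies 2D convolutions over the time and joint axes of a 2D pose sequence, in contrast to the purely temporal 1D convolutions used by MotionGPT, and quantizes the result into discrete motion tokens that a language model could consume alongside text tokens.

    # Minimal sketch of a 2D-convolutional motion tokenizer encoder
    # (illustrative assumptions: 17 joints, hidden size 128, codebook size 512).
    import torch
    import torch.nn as nn

    class Motion2DEncoder(nn.Module):
        def __init__(self, in_channels=2, hidden=128, codebook_size=512):
            super().__init__()
            # Treat the (x, y) coordinates as channels and convolve jointly over
            # time (frames) and joints, so spatial relationships between joints
            # fall inside the kernel's receptive field.
            self.conv = nn.Sequential(
                nn.Conv2d(in_channels, hidden, kernel_size=3, stride=(2, 1), padding=1),
                nn.ReLU(),
                nn.Conv2d(hidden, hidden, kernel_size=3, stride=(2, 1), padding=1),
                nn.ReLU(),
            )
            # Codebook for vector quantization into discrete motion tokens.
            self.codebook = nn.Embedding(codebook_size, hidden)

        def forward(self, motion):
            # motion: (batch, frames, joints, 2) -> (batch, 2, frames, joints)
            x = motion.permute(0, 3, 1, 2)
            z = self.conv(x)                          # (batch, hidden, frames', joints)
            z = z.flatten(2).transpose(1, 2)          # (batch, frames' * joints, hidden)
            # Nearest-codebook-entry lookup yields discrete motion token ids.
            dists = torch.cdist(z, self.codebook.weight.unsqueeze(0))
            return dists.argmin(dim=-1)               # (batch, frames' * joints)

    # Example: a 64-frame clip of 17 joints in 2D becomes a sequence of token ids.
    tokens = Motion2DEncoder()(torch.randn(1, 64, 17, 2))
    print(tokens.shape)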