
Paper: Transformer-Based Two-level Approach for Music-driven Dance Choreography

Authors: Yanbo Cheng and Yingying Wang

Affiliation: Department of Computing and Software, McMaster University, Hamilton, Canada

Keyword(s): Deep Learning, Character Animation, Motion Synthesis, Motion Stylization, Multimodal Synchronization.

Abstract: Human dance motions are complex, creative, and artistic expressions. Synthesizing high-quality dance motions and synchronizing them to music has long been a challenge in animation research. Synthesizing dance motions poses three problems: 1) dance movements are complex non-linear motions that follow high-level structures of the dance genre over a long horizon, yet must maintain stylistic consistency; 2) even within the same genre, dance movements require diversity, expressiveness, and nuance to appear natural and realistic; 3) the spatial-temporal features of dance movements can be influenced by music. In this paper, we address these issues with a novel two-level transformer-based dance generation system that synthesizes dance motions to match the audio input. Our high-level transformer network performs the choreography and generates dance movements with a consistent long-term structure, while our low-level implementer infuses diversity and realizes the actual dance performance. This two-level approach not only allows us to generate dances that are structurally consistent, but also enables us to effectively add styles learnt from a wide range of dance datasets. When training the choreography model, our approach fully utilizes existing dance datasets, even those without musical accompaniment, and thus differs from previous research that requires dance training data to be accompanied by music. Results in this work demonstrate that our two-level system generates high-quality dance motions that flexibly adapt to varying musical conditions, even when trained on a dataset of dance sequences without accompanying music.
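To make the two-level split concrete, the following is a minimal PyTorch sketch of the idea as described in the abstract: a high-level "choreographer" transformer autoregressively predicts a sequence of movement tokens conditioned on music features, and a low-level "implementer" decodes each token into pose frames. All module names, dimensions, and the discrete-token representation are illustrative assumptions for this sketch, not the authors' published implementation.

    # Minimal sketch of the two-level design from the abstract. Assumption:
    # discrete "movement tokens" bridge the two levels; the paper's actual
    # representation and hyperparameters may differ.
    import torch
    import torch.nn as nn

    class Choreographer(nn.Module):
        """High-level transformer: music features -> next movement-token logits."""
        def __init__(self, music_dim=128, d_model=256, n_tokens=512,
                     n_heads=8, n_layers=6):
            super().__init__()
            self.music_proj = nn.Linear(music_dim, d_model)   # music as memory
            self.token_emb = nn.Embedding(n_tokens, d_model)  # embed past tokens
            layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
            self.decoder = nn.TransformerDecoder(layer, n_layers)
            self.head = nn.Linear(d_model, n_tokens)

        def forward(self, music_feats, prev_tokens):
            # music_feats: (B, T_music, music_dim); prev_tokens: (B, T_tok)
            memory = self.music_proj(music_feats)
            tgt = self.token_emb(prev_tokens)
            causal = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
            h = self.decoder(tgt, memory, tgt_mask=causal)
            return self.head(h)  # (B, T_tok, n_tokens) next-token logits

    class Implementer(nn.Module):
        """Low-level network: movement tokens -> pose frames (e.g. joint rotations)."""
        def __init__(self, n_tokens=512, d_model=256, pose_dim=69,
                     frames_per_token=8):
            super().__init__()
            self.frames = frames_per_token
            self.token_emb = nn.Embedding(n_tokens, d_model)
            self.mlp = nn.Sequential(
                nn.Linear(d_model, d_model), nn.ReLU(),
                nn.Linear(d_model, pose_dim * frames_per_token))

        def forward(self, tokens):
            # tokens: (B, T_tok) -> poses: (B, T_tok * frames_per_token, pose_dim)
            h = self.mlp(self.token_emb(tokens))
            return h.view(tokens.size(0), tokens.size(1) * self.frames, -1)

At inference time one would sample tokens autoregressively from the Choreographer and decode them with the Implementer; the stylization and diversity mechanisms mentioned in the abstract are omitted from this sketch.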

License: CC BY-NC-ND 4.0


Paper citation in several formats:
Cheng, Y. and Wang, Y. (2024). Transformer-Based Two-level Approach for Music-driven Dance Choreography. In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - GRAPP; ISBN 978-989-758-679-8; ISSN 2184-4321, SciTePress, pages 127-139. DOI: 10.5220/0012434500003660

@conference{grapp24,
author={Yanbo Cheng and Yingying Wang},
title={Transformer-Based Two-level Approach for Music-driven Dance Choreography},
booktitle={Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - GRAPP},
year={2024},
pages={127-139},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012434500003660},
isbn={978-989-758-679-8},
issn={2184-4321},
}

TY  - CONF
JO  - Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - GRAPP
TI  - Transformer-Based Two-level Approach for Music-driven Dance Choreography
SN  - 978-989-758-679-8
IS  - 2184-4321
AU  - Cheng, Y.
AU  - Wang, Y.
PY  - 2024
SP  - 127
EP  - 139
DO  - 10.5220/0012434500003660
PB  - SciTePress
ER  -