Authors:
Simon Senecal, Niels A. Nijdam and Nadia Magnenat Thalmann
Affiliation:
University of Geneva, Geneva, Switzerland
Keyword(s):
Modelling of Natural Scenes and Phenomena, Motion Analysis, Couple Dance, Motion Features, Machine Learning.
Related Ontology Subjects/Areas/Topics:
Animation Algorithms and Techniques; Animation and Simulation; Computer Vision, Visualization and Computer Graphics; Computer-Supported Education; e-Learning; e-Learning Applications and Computer Graphics; Games for Education and Training; Geometry and Modeling; Interactive Environments; Model Validation; Modeling and Algorithms; Modeling of Natural Scenes and Phenomena
Abstract:
Learning a couple dance such as salsa is challenging, as it requires assimilating and correctly understanding all of the dance parameters. The dance is traditionally learned with a teacher, but certain situations and the variability of the dance-class environment can hinder the learning process. A better understanding of what makes a good salsa dancer from a motion-analysis perspective would therefore provide valuable knowledge and could complement traditional learning. In this paper, we propose a set of music- and interaction-based motion features to classify the performance of salsa dance couples into three learning states (beginner, intermediate and expert). These motion features interpret components gathered through interviews with teachers and professional dancers, together with other dance features identified in a systematic review of the literature. For the presented study, a motion capture database (SALSA) was recorded with 26 different couples at three skill levels dancing to 10 different tempos (260 clips). Each recorded clip contains a basic-step sequence and an extended improvisation sequence, lasting two minutes in total and captured at 120 frames per second. Each of the 27 motion features was computed on a sliding window corresponding to the 8-beat reference used in dance. Several multiclass classifiers were tested, mainly k-nearest neighbours, Random Forest and Support Vector Machine, reaching classification accuracies of up to 81% for three levels and 92% for two levels. A subsequent feature analysis validates 23 of the 27 proposed features. The work presented here has implications for future studies of motion analysis, couple dance learning and human-human interaction.
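The sketch below is an illustration only, not the authors' implementation: it shows, under assumed data shapes, how a sliding window tied to an 8-beat span at 120 fps and a comparison of the three classifier families named in the abstract (k-NN, Random Forest, SVM) could be wired up with scikit-learn. The SALSA database, its file format, the 27 motion features and the chosen tempo are not reproduced; the window-length mapping, the toy per-window statistics and the synthetic clips are assumptions made for the example.

```python
# Illustrative sketch: synthetic data stands in for the SALSA motion capture clips,
# and simple per-window statistics stand in for the paper's 27 motion features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

FPS = 120  # capture rate reported in the abstract


def window_length_frames(tempo_bpm: float, beats: int = 8) -> int:
    """Frames covering `beats` beats at a given tempo (assumed mapping)."""
    return int(round(beats * 60.0 / tempo_bpm * FPS))


def sliding_windows(frames: np.ndarray, win: int, hop: int):
    """Yield consecutive windows of motion-capture frames (frames x channels)."""
    for start in range(0, len(frames) - win + 1, hop):
        yield frames[start:start + win]


def toy_features(window: np.ndarray) -> np.ndarray:
    """Stand-in features: per-channel mean and standard deviation."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])


# Synthetic stand-in for one two-minute clip per couple, with a skill label
# in {0: beginner, 1: intermediate, 2: expert}.
rng = np.random.default_rng(0)
X, y = [], []
for couple in range(26):
    label = couple % 3
    clip = rng.normal(scale=1.0 + 0.3 * label, size=(2 * 60 * FPS, 12))
    win = window_length_frames(tempo_bpm=180.0)  # assumed tempo
    for w in sliding_windows(clip, win, hop=win):
        X.append(toy_features(w))
        y.append(label)
X, y = np.asarray(X), np.asarray(y)

# Compare the three classifier families mentioned in the abstract.
models = {
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy {scores.mean():.2f}")
```

On this synthetic data the printed accuracies are meaningless; the point is the structure: one feature vector per 8-beat window, one label per window, and a common cross-validation loop over the three classifiers.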