Figure 8: Examples of gait poses coming from the exploration application for various configurations of the sliders. We can see inhibited/exaggerated versions and arbitrary combinations of various styles.
been originally recorded, are, however, consistent with the overall stylistic behaviour of the captured subject.
With this application, users have been able to interact with a very high-dimensional space (originally 3260 parameters) through a simplified GUI, without reducing or hiding its complexity and variability. Compared to playing back original mocap sequences, the ability to browse a continuous stylistic space in realtime is more interactive and user-centred. Finally, representing the motion data through a 3D virtual character lets users experience the actual motion rather than a non-intuitive series of motion curves. We believe this made a substantial difference in users' ability to understand the stylistic variations at play.
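To make the slider behaviour concrete, the following is a minimal sketch, in Python, of how per-style slider weights could be mapped onto the high-dimensional parameter space. It assumes a neutral parameter vector plus one offset vector per recorded style; all names, shapes and the linear-offset formulation are illustrative assumptions, not the actual MAGE/HTS implementation.

import numpy as np

# Minimal, illustrative sketch (not the MAGE/HTS implementation): the
# high-dimensional synthesis parameters are modelled as a neutral vector
# plus a weighted sum of per-style offsets driven by the GUI sliders.
NUM_PARAMS = 3260  # size of the explored parameter space mentioned above

rng = np.random.default_rng(0)
neutral = rng.standard_normal(NUM_PARAMS)   # stand-in for the neutral-gait parameters
style_offsets = {                           # stand-ins for per-style deviations from neutral
    "proud": 0.1 * rng.standard_normal(NUM_PARAMS),
    "tired": 0.1 * rng.standard_normal(NUM_PARAMS),
}

def blend_styles(sliders):
    # weight 1 reproduces a recorded style, >1 exaggerates it,
    # 0 removes it, <0 inhibits/inverts it; several non-zero
    # weights combine styles into a new one.
    params = neutral.copy()
    for name, weight in sliders.items():
        params += weight * style_offsets[name]
    return params

# Example slider configuration: exaggerated "proud", inverted "tired".
frame_params = blend_styles({"proud": 1.5, "tired": -0.5})

Under this linear-offset assumption, extrapolating the weights outside the recorded range is what would yield inhibited or exaggerated poses such as those shown in Figure 8.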
6 CONCLUSIONS
In this paper we presented an innovative approach to the exploration of stylistic motion capture databases through a realtime motion synthesis framework. The feasibility and pertinence of this approach have been demonstrated on an expressive gait space exploration use-case. Our application enables the user to freely browse the stylistic space, to exaggerate, inhibit or invert the styles present in the training data, and to create new styles by combining existing ones. This reactive control provides a completely new way of visualising and exploring the motion style space. Since motion style is a notion that is difficult to describe or apprehend, we believe this approach is a valuable tool for the exploration and comprehension of expert gestures, which form a part of the intangible cultural heritage that is very difficult to represent.
ACKNOWLEDGEMENTS
J. Tilmanne and T. Ravet are supported by the
European Union (FP7-ICT-2011-9), under grant
agreement n° 600676 (i-Treasures project). N. d'Alessandro is funded by a regional fund called Région Wallonne FIRST Spin-Off. M. Astrinaki is
supported by a PhD grant funded by UMONS and
Acapela Group.