animation variation styles. By incorporating the
variations with the basic animation types, new and
unique character animations can be produced. The
presented procedural animation method is a proof-of-concept implementation, and as such only a selection of the animation types and variations has been tested with our proposed method. The Walk, Run, and Jump animation types, together with the Masculine to Feminine, Old to Young, and Tired to Energetic variations, are included; the system can, however, be extended to cover further styles in the future.
We have proposed a novel mathematical model
for describing human actions with various styles; the
same model has been used to describe each step. The
proposed transformation method is based on
applying a transfer function obtained during a
training phase to the base motion sequence in order
to create a desired motion. The first step for style
transformation of actions is to make the training data temporally equal in length, i.e. the motion data matrices must have the same number of frames so that mathematical operations can be performed on them. We have proposed a novel piecewise time warping technique to convert our motion data sets into data sets of the same temporal length. The model has then
been used to generate the necessary transfer
functions for style transformations between different
styles of the same action.
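For illustration, the following is a minimal sketch of the two steps described above, assuming each motion clip is stored as a NumPy matrix of shape (frames, joint channels); the segment boundaries, the additive form of the transfer function, and all function names are assumptions made for this example rather than the exact formulation used in our system.

```python
import numpy as np

def piecewise_time_warp(clip, seg_bounds, target_bounds):
    """Resample each segment of a clip (frames x channels) so that its
    boundaries land on the target boundaries, giving all clips the same
    number of frames before any matrix operations are applied."""
    frames = np.arange(clip.shape[0])
    warped = []
    for (s0, s1), (t0, t1) in zip(zip(seg_bounds[:-1], seg_bounds[1:]),
                                  zip(target_bounds[:-1], target_bounds[1:])):
        # Sample positions inside the source segment for every target frame.
        src = np.linspace(s0, s1, t1 - t0, endpoint=False)
        seg = np.stack([np.interp(src, frames, clip[:, c])
                        for c in range(clip.shape[1])], axis=1)
        warped.append(seg)
    return np.vstack(warped)

def learn_transfer_function(base, styled):
    # Per-frame offset between two time-aligned clips of the same action.
    return styled - base

def apply_transfer_function(base, transfer, weight=1.0):
    # Blend the learnt transfer onto a base clip to obtain the desired style.
    return base + weight * transfer
```

In such a sketch, a neutral walk and a tired walk would first be warped to the same length, the transfer learnt between them, and the result then applied, fully or partially, to another neutral clip of the same action.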
To evaluate the outputs of our transformation techniques, we designed a questionnaire and asked participants to assess various transformations applied to different actions. The user comments and ratings confirm the significance of our research. Our procedural animation method offers animators high-quality animations produced from an optical motion capture session, without incurring the cost of running their own sessions. The method utilizes a database of common animation sequences, derived from several motion capture sessions, which animators can manipulate and apply to their own existing characters using our procedural animation technique.
2 RELATED WORK
In recent years, much research has been
conducted with the aim of synthesizing human
motion sequences. Statistical models have been one
of the practical tools for human motion synthesis
(Tanco & Hilton, 2000; Li et al., 2002). Tanco and Hilton (2000, pp. 137-142) have trained a statistical model that employs a database of motion capture data for synthesizing realistic motion sequences; given the start and end keyframes of an existing sequence, original motion data are produced. Li et al. (2002, pp. 465-472) define a motion texture as a set of textons and their distribution values provided in a distribution matrix. Each motion texton is modeled by a linear dynamic system (LDS), and a maximum likelihood algorithm is designed to learn the textons from motion capture data. Finally, the learnt motion textures have been used to interactively edit motion sequences.
Egges et al. (2004, pp. 121-130) have employed principal component analysis (PCA) to synthesize human motion with two kinds of deviation: small posture variations and changes of balance. This approach is useful in cases where an animated character would otherwise stop and freeze, since in reality no character is ever completely motionless. Liu and Popović
(2002, pp. 408-416) have applied linear and angular
momentum constraints to avoid computing muscle
forces of the body for simple and rapid synthesis of
human motion. The creation of complex dynamic motions such as swinging and leaping has been carried out by Fang and Pollard (2003, pp. 417-426) using an optimization technique that minimizes an objective function subject to a set of constraints. Pullen and Bregler (2002, pp. 501-508)
have trained a system that is capable of synthesizing
motion sequences based on the key frames selected
by the user. Their method exploits the correlation between different joint values to create the missing frames; finally, a quadratic fit is used to smooth the estimated values, producing more realistic-looking results. Brand and Hertzmann
(2000, pp. 183-192) employ probabilistic models to interpolate and extrapolate between different styles and synthesize new stylistic dance sequences, using a cross-entropy optimization framework that enables their style machine to learn from various style examples. Safonova et al. (2004, pp. 514-521) define
an optimization problem for reducing the
dimensionality of the feature space of a motion
capture database, resulting in specific features.
These features are then used to synthesize various
motion sequences such as walk, run, jump and even
several flips. This research shows that the complete
feature space is not required for synthesis of human
motion. We have employed this property in section 5, where correlated joints have been ignored when transforming the actor style themes.
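As a rough illustration of this idea (not the formulation used by Safonova et al. nor the exact procedure of our own implementation), a PCA-style projection of the pose matrix shows how correlated joint channels collapse onto a few principal directions; the array layout, function name, and component count below are assumptions for this sketch.

```python
import numpy as np

def reduced_feature_space(poses, n_components=8):
    """Project pose data (frames x joint channels) onto its leading
    principal components; correlated joints contribute to the same
    components, so a low-dimensional space captures most of the motion."""
    mean = poses.mean(axis=0)
    centered = poses - mean
    # SVD of the centered pose matrix yields the principal directions.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                   # reduced feature basis
    coords = centered @ basis.T                 # low-dimensional coordinates
    explained = (s[:n_components] ** 2).sum() / (s ** 2).sum()
    return coords, basis, mean, explained
```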
Hsu et al. (2005, pp. 1082-1089) conduct style
translations such as sideways walk and crouching
walk based on a series of alignment mappings
followed by space warping techniques using an LTI
model. While this technique has been shown to be functional