2 RELATED WORK
The techniques used to rig 3D characters for convincing facial animation can be broadly divided into two areas. The first comprises those that mimic the movement of the face surface alone, replicating facial poses through surface deformations (Guenter et al., 1998; Kalra et al., 1992). The second comprises those that model the anatomy of the face, replicating the movement of bones and muscles within a virtual framework (Lee et al., 1995; Platt and Badler, 1981; Waters and Frisbie, 1995). Some of the earliest work in facial animation reflects this split, with Parke's work on the parameterisation of faces balanced by Waters' (1987) attempt to replicate facial movement by modelling the action of muscles. In the decades since this pioneering work, considerable research effort has been put into generating realistic facial animation, much of it reviewed in detail by Noh and Neumann (1998) and Ersotelos (2008).
Of particular relevance are Lee et al.’s (1995)
efforts at digitising facial geometries and
automatically animating them through the dynamic
simulation of facial tissues and muscles. Their
approach was to construct functional models of the
heads of human subjects from laser-scanned range
and reflectance data. These models were then
extended with contractile muscles embedded within
a dynamic skin model. The result was automatic
animation of a human face from a scanned subject.
Noh and Neumann (2001) made considerable advances in automatic character animation with their work on the
cloning of expressions. Their technique was one of
the first to directly address the problem of reusing
existing animations, and transferring them to newly
created virtual characters. After letting users select a
set of points on the surface of a model, their method
was able to transfer vertex motion vectors from a
source character to a target character, with the aid of
an automated heuristic correspondence search.
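The core of such a transfer can be illustrated with a minimal sketch. The names and the simplification to a single per-vertex scale are our own assumptions; Noh and Neumann's actual method additionally reorients the motion vectors to suit the target face's local surface geometry:

```python
import numpy as np

def clone_expression(source_deltas, correspondence, scale):
    """Toy motion-vector transfer for expression cloning.

    source_deltas:  (Vs, 3) displacement of each source vertex
                    for one frame of an expression
    correspondence: (Vt,) index of the source vertex matched to
                    each target vertex by the correspondence search
    scale:          (Vt, 1) factors adapting motion magnitudes to
                    the target face's local proportions (assumed
                    precomputed; a simplification of the original)
    """
    # Each target vertex inherits the motion of its matched
    # source vertex, rescaled to the target's proportions.
    return source_deltas[correspondence] * scale
```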
Orvalho et al. (2006) extended this concept by adapting a generic facial rig to different facial models. The technique required considerable labelling effort, yet was able to find corresponding points between source and target faces. This point matching was then used as a basis for transferring a complex, muscle-based facial rig system, enabling the target face to replicate the expressions provided by the base rig. Although this technique is of some interest, the authors do not present quantitative results, offering only a few qualitative images to demonstrate the validity of their method.
Despite these efforts, a substantial gap remains in the state of the art before we reach a facial modelling system that fulfils all four points listed in Section 1. In this
paper we present our efforts at addressing this gap,
with a highly automated system that is capable of
enabling artists to quickly create suitable facial
animation for a wide variety of face models.
3 THE MASKLE
3.1 Overview
The concept of the Maskle is to allow artists to
create a facial animation system once, yet be able to
re-apply it to as many characters as they desire; the key property, in this sense, is that the system is reusable. Unlike some of the related research
presented above, the goal of the system is not to
transfer an animation system from one face to
another; neither is it to automatically animate a face
based on a scan or photograph. Rather, it is designed
such that an artist can develop their own system of
facial animation and, once designed, quickly apply
this system to any number of characters that they
create.
The type of facial animation system that the
Maskle is based around is a standard bone-weight
system, where the deformation of the set of vertices
that form the skin of a model is controlled by the
movement of an underlying skeleton; the exact
movement of each vertex, relative to the bones of the skeleton, is represented by a set of numeric proportions, or weights, assigned to that vertex. The justification for basing our algorithm on a bone-weight system, rather than on other animation systems such as blend shapes, is that a bone-weight system can be abstracted to a number of control points representing the locations of the bones; this abstraction facilitates the organisation of the weight-transfer algorithm presented below, and allows rapid testing to ensure the results are satisfactory.
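To make this concrete, the following is a minimal sketch of linear blend skinning, the standard formulation underlying such bone-weight systems. The array shapes, function name, and use of 4x4 homogeneous transforms are our illustrative assumptions rather than details of the Maskle implementation:

```python
import numpy as np

def skin_vertices(rest_positions, bone_transforms, weights):
    """Linear blend skinning: each deformed vertex is a weighted
    blend of its rest position as moved by every bone.

    rest_positions:  (V, 3) vertex positions in the rest pose
    bone_transforms: (B, 4, 4) current transform of each bone,
                     expressed relative to its rest-pose transform
    weights:         (V, B) per-vertex weights; each row sums to 1
    """
    V = rest_positions.shape[0]
    # Homogeneous coordinates so 4x4 transforms apply directly.
    rest_h = np.hstack([rest_positions, np.ones((V, 1))])   # (V, 4)
    # Each vertex as transformed by each bone: (B, V, 4).
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, rest_h)
    # Blend the per-bone results using the vertex weights: (V, 4).
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]
```

Because the animation itself lives entirely in the bone transforms, reusing a rig on a new mesh reduces to producing a sensible weights matrix for that mesh, which is the problem the weight-transfer algorithm below addresses.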
Once an artist has created a character, bone-
weight animation of a face usually requires
extensive effort in accurately assigning, or painting,
the weights for each vertex and for each bone. This
can be done automatically, and many 3D design packages such as Autodesk Maya and 3DS Max provide such functionality, frequently based on envelope systems (Autodesk, 2007). However,