improving its UI.
Through this work we would also like to highlight
the importance of developing muscle-based models
to represent the anatomy of the human face more
accurately in order to generate realistic expressions.
The rest of the paper is organised as follows:
Section 2 reviews related work in the field of
facial modelling. Section 3 describes our Facial
Model component in FAS. Section 4 presents the
implementation and test results, followed by
conclusions in Section 5.
2 RELATED WORK
Initial efforts in 3D facial modelling began with
(Parke, 1972) who developed the first parameterized
facial model. (Waters, 1987) and (Magnenat-
Thalmann et al., 1988) followed soon after by
developing pseudo-muscle-based models. The
technique of free-form deformation was also used to
create facial expressions, for example in (Kalra et
al., 1992).
(Kahler et al., 2001) utilised geometry-based
muscle modelling for facial animation, whereas
(Ostermann, 1998) used the MPEG-4 standard to
animate synthetic faces.
The state of the art in facial animation has
advanced tremendously over the years, and
researchers continue to refine facial models to
produce high-quality, realistic expressions. As a
consequence, new and hybrid methods based on
FACS are continuously being developed (Tolba et
al., 2018). For example, the interactive region-based
linear 3D face model developed by (Tena et al.,
2011) divided the facial model into regions with
user-given constraints in the form of markers placed
on the face model. When activated, these markers
affected only the region in which they were placed
and produced better results than the same markers
on a region-less model.
(Lewis et al., 2014) discuss the practice and
theory of blend shape models. Performance-based
facial animation that enables the user to control the
facial expression of a digital avatar in real time is
proposed by (Weise et al., 2011); (Pauly, 2013) and
(Bermano et al., 2013) proposed similar
performance-based facial models.
There has also been success in producing facial
animation from video or motion capture data.
(Sifakis et al., 2005) described an implementation in
which muscle activations were automatically
determined from motion capture marker data.
Motion detectors or sensors that are sensitive to
movement can be utilised to detect facial
expressions, which are then animated on a computer
facial model.
Recent research has shown a renewed interest in
using FACS as the basis for generating and
measuring facial expressions. For example (Alkawaz
et al., 2015) combined blend shape interpolation
(BSI) and FACS to create realistic and expressive
facial animation. They trialled their system on four
basic emotions defined in FACS: anger, happiness,
sadness, and fear. (Tolba et al., 2018) discussed the
applications and limitations of FACS by comparing
it with MPEG-4-based facial animation systems.
Articles such as (Ping et al., 2013; Li et al., 2012;
Sandbach et al., 2012; Ersotelos and Dong, 2008;
Alkawaz et al., 2014) provide a good survey of
techniques developed in the area of facial modelling
in recent times, while (Noh and Neumann, 1998)
provides a review of theories and techniques
developed decades earlier that form the foundation
of modern facial modelling techniques.
3 FACIAL MODEL IN FAS
In this research we use the Facial Animation System
(FAS) described in (Kumar and Vanualailai, 2016).
The original Waters face geometry is retained and
used in the Facial Model component in FAS. It is
based on facial anatomy which dictates that facial
expressions are created by relaxation and tightening
of facial muscles. These muscles can be
categorised into three types: linear muscles, sheet
muscles and sphincter muscles. Mixtures of these
muscle types coordinate to produce each facial
expression.
Waters modelled two types of muscles: linear
muscles, which pull vertices within their region of
influence towards the point of attachment on the
skin; and a sphincter muscle around the mouth,
which pulls vertices within its region of influence
towards the centre of an ellipse. The third muscle
type (sheet muscles) was not implemented in the
original Waters model but was described in detail in
(Zhang et al., 2001). The equations we use in this
paper are drawn from (Tanguy, 2001; Edge and
Maddock, 2001), which are based on the original
Waters publication but expressed in a simplified
form for clarity. The geometry comprises 236
interconnected vertices forming 876 triangular
polygons. The reader is directed to
(Waters, 1987) to inspect the original visual
representation of the muscles and the facial model.
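As an illustrative sketch of the two deformations described above, the following is our own simplified Python rendition, not the FAS source code. The falloff functions, parameter names, and the convention of moving a vertex a fraction of the way towards the target point are assumptions loosely following the simplified equations in (Tanguy, 2001; Edge and Maddock, 2001), not the exact Waters formulation:

```python
import numpy as np

def linear_muscle_pull(p, v1, v2, theta_max, rs, k):
    """Pull vertex p towards the muscle attachment point v1.

    v1: attachment point (fixed end), v2: insertion point of the muscle.
    theta_max: half-angle of the cone of influence around the muscle axis.
    rs: radius of influence; the pull fades to zero at this distance.
    k: contraction strength in [0, 1].
    (Illustrative falloff terms, not the exact published equations.)
    """
    to_p = p - v1
    d = np.linalg.norm(to_p)
    if d == 0.0 or d >= rs:
        return p.copy()                    # at v1 or outside radial zone
    axis = (v2 - v1) / np.linalg.norm(v2 - v1)
    cos_t = np.dot(to_p, axis) / d
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    if theta > theta_max:
        return p.copy()                    # outside the cone of influence
    a = np.cos(theta * np.pi / (2 * theta_max))  # angular falloff: 1 on axis
    r = np.cos(d / rs * np.pi / 2)               # radial falloff: 1 at v1
    return p + k * a * r * (v1 - p)        # move a fraction towards v1

def sphincter_pull(p, c, lx, ly, k):
    """Pull vertex p towards the centre c of an ellipse (semi-axes lx, ly)."""
    x, y = p[0] - c[0], p[1] - c[1]
    f = 1.0 - np.sqrt((x / lx) ** 2 + (y / ly) ** 2)  # 1 at centre, 0 on rim
    if f <= 0.0:
        return p.copy()                    # outside the ellipse of influence
    return p + k * f * (c - p)
```

In this sketch, vertices on the muscle axis and close to the attachment point receive the strongest pull, while both attenuation terms fall smoothly to zero at the boundary of the region of influence, mirroring the qualitative behaviour of the Waters muscles described above.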
In our Facial Model we reduced the number of
linear muscles from 24 to 18. Figure shows the