These results can be explained by two doubt states that occur frequently: the doubt between the Fear and Surprise expressions and the doubt between the Sadness and Anger expressions. Interestingly, these pairs of expressions are also notoriously difficult for human observers to discriminate (Roy et al., 2007).
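As a minimal illustration of how such doubt states arise in the underlying Transferable Belief Model (Smets and Kennes, 1994), the Python sketch below conjunctively combines two basic belief assignments defined over the pair {Fear, Surprise}. The feature names and mass values are hypothetical, chosen only to show the mechanism; they are not those produced by the proposed model.

from itertools import product

# Frame of discernment restricted to the two frequently confused expressions.
FEAR, SURPRISE = "Fear", "Surprise"

def conjunctive_combination(m1, m2):
    # Unnormalized conjunctive rule of the TBM:
    # m12(A) = sum of m1(B) * m2(C) over all B, C with B & C == A.
    # Any mass assigned to the empty set represents conflict.
    m12 = {}
    for (b, mass_b), (c, mass_c) in product(m1.items(), m2.items()):
        m12[b & c] = m12.get(b & c, 0.0) + mass_b * mass_c
    return m12

# Hypothetical basic belief assignments from two facial features: each one
# supports the disjunction Fear-or-Surprise more strongly than a singleton,
# so most of the combined mass stays on the doubt state {Fear, Surprise}.
m_brows = {frozenset({FEAR}): 0.2, frozenset({FEAR, SURPRISE}): 0.8}
m_mouth = {frozenset({SURPRISE}): 0.3, frozenset({FEAR, SURPRISE}): 0.7}

for subset, mass in conjunctive_combination(m_brows, m_mouth).items():
    print(sorted(subset) or "conflict", round(mass, 3))

With these numbers, 0.56 of the combined mass remains on {Fear, Surprise}: neither singleton dominates, which is exactly the kind of doubt state reported above.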
Compared with the model of Hammal et al. (2007), introducing temporal modeling of the information from all the facial features yields an average performance increase of 12%.
the performances. To better evaluate the quality of
the obtained results, the model performances are
compared with those of human observers on the
same data. 15 human observers were asked to
discriminate between the six basic facial expressions
on 80 videos randomly interleaved in 4 separate
blocks. Figure 11 reports the human performances
(grey bars). The human and model performances are
not significantly different (two-way ANOVA,
P>0.33).
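For readers who wish to reproduce this kind of comparison, the following sketch runs a two-way ANOVA (judge type x expression) with statsmodels on synthetic placeholder scores. The column names and the score generator are assumptions made for illustration; they do not reproduce the study's data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Synthetic placeholder data: one recognition score per (judge, expression)
# pair, standing in for the per-video results of the actual experiment.
rng = np.random.default_rng(0)
expressions = ["Happiness", "Surprise", "Disgust", "Anger", "Sadness", "Fear"]
judges = ["model"] + [f"observer_{i}" for i in range(1, 16)]
rows = [{"judge_type": "model" if j == "model" else "human",
         "expression": e,
         "score": rng.uniform(0.6, 0.9)}
        for j in judges for e in expressions]
df = pd.DataFrame(rows)

# Two-way ANOVA: recognition score as a function of judge type (model vs.
# human) and expression. A large p-value on C(judge_type) corresponds to
# the reported absence of a significant model/human difference.
fit = ols("score ~ C(judge_type) + C(expression)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))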
7 CONCLUSIONS
The current paper proposes a model combining holistic and feature-based processing for the automatic recognition of facial expressions, one that handles asynchronous facial feature deformations and multi-expression sequences. Compared to the static results, introducing the transient features and the temporal modeling of the facial features increases performance by 12% and yields results that compare favorably with those of human observers. This opens promising perspectives for the development of the model. For example, preliminary results on spontaneous pain expression recognition demonstrated its ability to generalize to non-prototypical facial expressions. A future direction would be to synchronize the facial and vocal modalities inside each detected emotional segment and to define a fusion process towards a bimodal model for multi-expression recognition.
REFERENCES
Beaudot W., 1994. Le traitement neuronal de l'information dans la rétine des vertébrés : un creuset d'idées pour la vision artificielle [Neural processing of information in the vertebrate retina: a melting pot of ideas for artificial vision]. Thèse de Doctorat, INPG, Laboratoire TIRF, Grenoble, France.
Denoeux T., 2008. Conjunctive and disjunctive combination of belief functions induced by non-distinct bodies of evidence. Artificial Intelligence, 172, 234-264.
Gralewski L., Campbell N., Penton-Voak I., 2006. Using a tensor framework for the analysis of facial dynamics. In Proc. IEEE Int. Conf. on Automatic Face and Gesture Recognition (FG), 217-222.
Hammal Z., Couvreur L., Caplier A., Rombaut M., 2007. Facial expressions classification: A new approach based on transferable belief model. International Journal of Approximate Reasoning, 46(3), 542-567.
Hammal Z., Eveno N., Caplier A., Coulon P.-Y., 2006. Parametric models for facial features segmentation. Signal Processing, 86, 399-413.
Littlewort G., Bartlett M. S., Fasel I., Susskind J., Movellan J., 2006. Dynamics of facial expression extracted automatically from video. Image and Vision Computing, 24, 615-625.
Massot C., Herault J., 2008. Model of frequency analysis in the visual cortex and the shape from texture problem. International Journal of Computer Vision, 76(2).
Pantic M., Patras I., 2006. Dynamics of facial expression: Recognition of facial actions and their temporal segments from face profile image sequences. IEEE Trans. Systems, Man, and Cybernetics, Part B, 36(2), 433-449.
Pantic M., Valstar M. F., Rademaker R., Maat L., 2005. Web-based database for facial expression analysis. In Proc. IEEE Int. Conf. on Multimedia and Expo (ICME'05), Amsterdam, The Netherlands.
Smets P., Kennes R., 1994. The transferable belief model. Artificial Intelligence, 66, 191-234.
Smith M., Cottrell G., Gosselin F., Schyns P. G., 2005. Transmitting and decoding facial expressions of emotions. Psychological Science, 16, 184-189.
Tian Y., Kanade T., Cohn J. F., 2001. Recognizing action units for facial expression analysis. IEEE Trans. Pattern Analysis and Machine Intelligence, 23(2), 97-115.
Tian Y. L., Kanade T., Cohn J. F., 2005. Facial expression analysis. In S. Z. Li & A. K. Jain (Eds.), Handbook of Face Recognition, 247-276. New York: Springer.
Tong Y., Liao W., Ji Q., 2007. Facial action unit recognition by exploiting their dynamics and semantic relationships. IEEE Trans. Pattern Analysis and Machine Intelligence, 29, 1683-1699.
Valstar M. F., Pantic M., 2007. Combined support vector machines and hidden Markov models for modeling facial action temporal dynamics. In Proc. IEEE Workshop on Human Computer Interaction, Rio de Janeiro, Brazil, 118-127.
Zhang Y., Ji Q., 2005. Active and dynamic information fusion for facial expression understanding from image sequences. IEEE Trans. Pattern Analysis and Machine Intelligence, 27(5), 699-714.