Beyond 3D character animation, our methods are suitable for emotion-based applications such as affective virtual environments, advertising, or emotional gaming.
As future work, we aim to define a transfer algorithm that uses the estimated movements and emotions to trigger facial animation. Furthermore, we intend to study how estimating additional facial behavior information (e.g., forehead and eye movements) and combining it with speech data can improve the animation and user embodiment in VR environments.
ACKNOWLEDGEMENTS
This work is supported by Instituto de Telecomunicações (Project Incentivo ref: Projeto Incentivo/EEI/LA0008/2014 and project UID ref: UID/EEA/5008/2013) and University of Porto. The authors would like to thank Elena Kokkinara from Trinity College Dublin and Pedro Mendes from University of Porto for their support at the beginning of the project.
VISAPP 2016 - International Conference on Computer Vision Theory and Applications