Li, H., Weise, T., and Pauly, M. (2010). Example-based
facial rigging. ACM Trans. Graph., 29(4).
MakeHuman (2021). MakeHuman. http://www.makehumancommunity.org/. (Accessed on 11/02/2020).
Malatesta, L., Raouzaiou, A., Karpouzis, K., and Kollias, S.
(2009). Mpeg-4 facial expression synthesis. Personal
and Ubiquitous Computing, 13:77–83.
Mavadati, S. M., Mahoor, M. H., Bartlett, K., Trinh, P., and
Cohn, J. F. (2013). Disfa: A spontaneous facial ac-
tion intensity database. IEEE Transactions on Affec-
tive Computing, 4(2):151–160.
McDuff, D., Mahmoud, A., Mavadati, M., Amr, M., Turcot, J., and el Kaliouby, R. (2016). Affdex sdk: a
cross-platform real-time multi-face expression recog-
nition toolkit. In Proceedings of the 2016 CHI confer-
ence extended abstracts on human factors in comput-
ing systems, pages 3723–3726.
Mollahosseini, A., Hasani, B., and Mahoor, M. H. (2017).
Affectnet: A database for facial expression, valence,
and arousal computing in the wild. IEEE Transactions
on Affective Computing, 10(1):18–31.
Mori, M., MacDorman, K. F., and Kageki, N. (2012). The
uncanny valley [from the field]. IEEE Robotics & Au-
tomation Magazine, 19(2):98–100.
Moser, L., Hendler, D., and Roble, D. (2017). Masquer-
ade: fine-scale details for head-mounted camera mo-
tion capture data. In ACM SIGGRAPH 2017 Talks,
pages 1–2.
Motion, C. (2017). Real-Time Live! [Cubic Motion web site].
MPEG (2021). Moving Picture Experts Group. https://mpeg.
chiariglione.org/standards/mpeg-4.
Oquab, M., Bottou, L., Laptev, I., and Sivic, J. (2014).
Learning and transferring mid-level image represen-
tations using convolutional neural networks. In Pro-
ceedings of the IEEE conference on computer vision
and pattern recognition, pages 1717–1724.
Pandzic, I. S. and Forchheimer, R. (2003). MPEG-4 facial
animation: the standard, implementation and appli-
cations. John Wiley & Sons.
Pantic, M., Valstar, M., Rademaker, R., and Maat, L.
(2005). Web-based database for facial expression
analysis. In 2005 IEEE international conference on
multimedia and Expo, pages 5 pp. IEEE.
Pardas, M., Bonafonte, A., and Landabaso, J. L. (2002).
Emotion recognition based on mpeg-4 facial anima-
tion parameters. In 2002 IEEE International Con-
ference on Acoustics, Speech, and Signal Processing,
volume 4, pages IV–3624. IEEE.
Pasquariello, S. and Pelachaud, C. (2002). Greta: A simple
facial animation engine.
Radzihovsky, S., de Goes, F., and Meyer, M. (2020). Face-
baker: Baking character facial rigs with machine
learning. In ACM SIGGRAPH 2020 Talks, SIG-
GRAPH ’20, New York, NY, USA. Association for
Computing Machinery.
Ravikumar, S., Davidson, C., Kit, D., Campbell, N. D.,
Benedetti, L., and Cosker, D. (2016). Reading be-
tween the dots: Combining 3d markers and facs classi-
fication for high-quality blendshape facial animation.
In Graphics Interface, pages 143–151.
Reverdy, C., Gibet, S., and Larboulette, C. (2015). Optimal
marker set for motion capture of dynamical facial ex-
pressions. In Proceedings of the 8th ACM SIGGRAPH
Conference on Motion in Games, pages 31–36.
Roble, D., Hendler, D., Buttell, J., Cell, M., Briggs, J., Red-
dick, C., Iannazzo, L., Li, D., Williams, M., Moser,
L., et al. (2019a). Real-time, single camera, digital
human development. In ACM SIGGRAPH 2019 Real-
Time Live!, pages 1–1.
Roble, D., Hendler, D., Buttell, J., Cell, M., Briggs, J., Red-
dick, C., Iannazzo, L., Li, D., Williams, M., Moser,
L., Wong, C., Kachkovski, D., Huang, J., Zhang, K.,
McLean, D., Cloudsdale, R., Milling, D., Miller, R.,
Lawrence, J., and Chien, C. (2019b). Real-time, sin-
gle camera, digital human development. In ACM SIG-
GRAPH 2019 Real-Time Live!, SIGGRAPH ’19, New
York, NY, USA. Association for Computing Machin-
ery.
Rothkrantz, L., Datcu, D., and Wiggers, P. (2009). Facs-
coding of facial expressions. In Proceedings of the
International Conference on Computer Systems and
Technologies and Workshop for PhD Students in Com-
puting, CompSysTech ’09, New York, NY, USA. As-
sociation for Computing Machinery.
Seymour, M., Evans, C., and Libreri, K. (2017a). Meet
mike: epic avatars. In ACM SIGGRAPH 2017 VR Vil-
lage, pages 1–2.
Seymour, M., Riemer, K., and Kay, J. (2017b). Interactive
realistic digital avatars: revisiting the uncanny valley.
SouthKorea, G. GX-Lab. http://www.gxlab.co.kr/. (Accessed
on 11/02/2020).
Tinwell, A., Grimshaw, M., Nabi, D. A., and Williams, A.
(2011). Facial expression of emotion and perception
of the uncanny valley in virtual characters. Computers
in Human Behavior, 27(2):741–749.
Valente, S. and Dugelay, J.-L. (2000). Face tracking and re-
alistic animations for telecommunicant clones. IEEE
MultiMedia, 7(1):34–43.
van der Struijk, S., Huang, H.-H., Mirzaei, M. S., and
Nishida, T. (2018). Facsvatar: An open source mod-
ular framework for real-time facs based facial anima-
tion. In Proceedings of the 18th International Confer-
ence on Intelligent Virtual Agents, pages 159–164.
Villagrasa, S. and Susín Sánchez, A. (2009). Face! 3d facial animation system based on facs. In IV Iberoamerican symposium in computer graphics, pages 203–209.
Zibrek, K., Kokkinara, E., and McDonnell, R. (2018). The effect of realistic appearance of virtual characters in immersive environments: does the character's personality play a role? IEEE Transactions on Visualization and Computer Graphics, 24(4):1681–1690.
Zibrek, K. and McDonnell, R. (2014). Does render style
affect perception of personality in virtual humans? In
Proceedings of the ACM Symposium on Applied Per-
ception, pages 111–115.