Elicitation and Assessment, eds. Coan, J. A., and Allen,
J. B., Oxford University Press.
Darwin, C., and Prodger, P. (1872/1998). The Expression of the Emotions in Man and Animals. Oxford University Press, USA.
Ekman, P. (Ed.). (2006). Darwin and Facial Expression: A
Century of Research in Review. Cambridge, MA:
Malor Books, Institute for the Study of Human
Knowledge.
Khorrami, P., Le Paine, T., Brady, K., Dagli, C., and Huang, T. S. (2016). How Deep Neural Networks Can Improve Emotion Recognition on Video Data. In 2016 IEEE International Conference on Image Processing (ICIP). New York, NY, USA: IEEE, pp. 619-623.
Kozasa, C., Fukutake, H., Notsu, H., Okada, Y., and Niijima, K. (2006). Facial Animation Using Emotional Model. In International Conference on Computer Graphics, Imaging and Visualization, pp. 428-433.
Lewinski, P., Den Uyl, T. M., and Butler, C. (2014). Automated Facial Coding: Validation of Basic Emotions and FACS AUs in FaceReader. Journal of Neuroscience, Psychology, and Economics, 7(4), p. 227.
Loyall, A. B. (1997). Believable Agents: Building Interactive Personalities (No. CMU-CS-97-123). Carnegie Mellon University, Department of Computer Science. Accessed 12 October 2022 at: https://www.cs.cmu.edu/afs/cs/project/oz/web/papers/CMU-CS-97-123.pdf
Lucey, P., Cohn, J. F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. (2010). The Extended Cohn-Kanade Dataset (CK+): A Complete Dataset for Action Unit and Emotion-Specified Expression. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, pp. 94-101.
Mascarenhas, S., Guimarães, M., Santos, P. A., Dias, J., Prada, R., and Paiva, A. (2021). FAtiMA Toolkit: Toward an Effective and Accessible Tool for the Development of Intelligent Virtual Agents and Social Robots. arXiv preprint arXiv:2103.03020.
Metallinou, A., Lee, C., Busso, C., Carnicke, S., and Narayanan, S. (2010). The USC CreativeIT Database: A Multimodal Database of Theatrical Improvisation. In Proceedings of Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality.
Moore, S. (1984). The Stanislavski System: The Professional Training of an Actor. Penguin Books, New York, NY, USA, pp. 41-46.
Ortony, A., Clore, G. L., and Collins, A. (1990). The Cognitive Structure of Emotions. Cambridge, UK: Cambridge University Press, pp. 34-58.
Paier, W., Hilsmann, A., and Eisert, P. (2021). Example-
Based Facial Animation of Virtual Reality Avatars
Using Auto-Regressive Neural Networks. IEEE
Computer Graphics and Applications, 41(4), pp. 52-63.
Paier, W., Hilsmann, A., and Eisert, P. (2020). Neural Face Models for Example-Based Visual Speech Synthesis. In European Conference on Visual Media Production, pp. 1-10.
Posner, J., Russell, J. A., and Peterson, B. S. (2005). The Circumplex Model of Affect: An Integrative Approach to Affective Neuroscience, Cognitive Development, and Psychopathology. Development and Psychopathology, 17(3), pp. 715-734.
Schiffer, S., Zhang, S., and Levine, M. (2022). Facial Emotion Expression Corpora for Training Game Character Neural Network Models. In Proceedings of VISIGRAPP.
Skiendziel, T., Rösch, A. G., and Schultheiss, O. C. (2019). Assessing the Convergent Validity Between Noldus FaceReader 7 and Facial Action Coding System Scoring. PLoS ONE, 14(10): e0223905.
Soleymani, M., Larson, M., Pun, T., and Hanjalic, A. (2014). Corpus Development for Affective Video Indexing. IEEE Transactions on Multimedia, 16(4), pp. 1075-1089.
Suwajanakorn, S., Seitz, S. M., Kemelmacher-Shlizerman,
I. (2017). Synthesizing Obama: Learning Lip Sync
from Audio. ACM Transactions on Graphics, 36(4), pp.
1-13.
Vidal, A., Salman, A., Lin, W., and Busso, C. (2020). MSP-Face Corpus: A Natural Audiovisual Emotional Database. In Proceedings of the 2020 International Conference on Multimodal Interaction, pp. 397-405.
Wingenbach, T., Ashwin, C., and Brosnan, M. (2016). Validation of the Amsterdam Dynamic Facial Expression Set – Bath Intensity Variations (ADFES-BIV). PLoS ONE, 11(1): e0147112.