
Gjoreski, M., Kiprijanovska, I., Stankoski, S., Mavridou, I., Broulidakis, M. J., Gjoreski, H., and Nduka, C. (2022). Facial EMG sensing for monitoring affect using a wearable device. Scientific Reports, 12(1):16876.
Grahlow, M., Rupp, C. I., and Derntl, B. (2022). The impact of face masks on emotion recognition performance and perception of threat. PLOS ONE, 17(2):e0262840.
Guntinas-Lichius, O., Trentzsch, V., Mueller, N., Heinrich, M., Kuttenreich, A.-M., Dobel, C., Volk, G. F., Graßme, R., and Anders, C. (2023). High-resolution surface electromyographic activities of facial muscles during the six basic emotional expressions in healthy adults: a prospective observational study. Scientific Reports, 13(1):19214.
Hjortsjö, C.-H. (1969). Man’s Face and Mimic Language. Studentlitteratur, Lund, Sweden.
Jin, B., Qu, Y., Zhang, L., and Gao, Z. (2020). Diagnosing Parkinson disease through facial expression recognition: Video analysis. Journal of Medical Internet Research, 22(7):e18697.
Kappas, A., Krumhuber, E., and Küster, D. (2013). Facial behavior. In Hall, J. A. and Knapp, M. L., editors, Nonverbal Communication, pages 131–165. De Gruyter Mouton, Berlin.
Kastendieck, T., Zillmer, S., and Hess, U. (2022). (Un)mask yourself! Effects of face masks on facial mimicry and emotion perception during the COVID-19 pandemic. Cognition & Emotion, 36(1):59–69.
Kleiner, M., Wallraven, C., Breidt, M., Cunningham, D. W., and Bülthoff, H. H. (2004). Multi-viewpoint video capture for facial perception research. In Workshop on Modelling and Motion Capture Techniques for Virtual Environments (CAPTECH 2004), Geneva, Switzerland.
Kołodziej, M., Majkowski, A., and Jurczak, M. (2024). Acquisition and Analysis of Facial Electromyographic Signals for Emotion Recognition. Sensors, 24(15):4785.
Krumhuber, E. G., Küster, D., Namba, S., and Skora, L. (2021). Human and machine validation of 14 databases of dynamic facial expressions. Behavior Research Methods, 53(2):686–701.
Krumhuber, E. G., Skora, L. I., Hill, H. C. H., and Lander, K. (2023). The role of facial movements in emotion recognition. Nature Reviews Psychology, 2(5):283–296.
Küster, D., Krumhuber, E. G., Steinert, L., Ahuja, A., Baker, M., and Schultz, T. (2020). Opportunities and challenges for using automatic human affect analysis in consumer research. Frontiers in Neuroscience, 14:400.
Littlewort, G., Whitehill, J., Wu, T., Fasel, I., Frank, M., Movellan, J., and Bartlett, M. (2011). The computer expression recognition toolbox (CERT). In 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG), pages 298–305.
Mattavelli, G., Barvas, E., Longo, C., Zappini, F., Ottaviani, D., Malaguti, M. C., Pellegrini, M., and Papagno, C. (2021). Facial expressions recognition and discrimination in Parkinson’s disease. Journal of Neuropsychology, 15(1):46–68.
Mauss, I. B. and Robinson, M. D. (2009). Measures of emotion: A review. Cognition & Emotion, 23(2):209–237.
Namba, S., Sato, W., Osumi, M., and Shimokawa, K. (2021a). Assessing Automated Facial Action Unit Detection Systems for Analyzing Cross-Domain Facial Expression Databases. Sensors, 21(12):4222.
Namba, S., Sato, W., and Yoshikawa, S. (2021b). Viewpoint Robustness of Automated Facial Action Unit Detection Systems. Applied Sciences, 11(23):11171.
Oh Kruzic, C., Kruzic, D., Herrera, F., and Bailenson, J. (2020). Facial expressions contribute more than body movements to conversational outcomes in avatar-mediated virtual environments. Scientific Reports, 10(1):20626.
Ortony, A. (2022). Are All “Basic Emotions” Emotions? A Problem for the (Basic) Emotions Construct. Perspectives on Psychological Science, 17(1):41–61.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830.
Schuetz, I. and Fiehler, K. (2022). Eye tracking in virtual reality: Vive Pro Eye spatial accuracy, precision, and calibration reliability. Journal of Eye Movement Research, 15(3).
Schuller, B., Valstar, M., Eyben, F., Cowie, R., and Pantic, M. (2012). AVEC 2012: the continuous audio/visual emotion challenge. In Proceedings of the 14th ACM International Conference on Multimodal Interaction, ICMI ’12, pages 449–456, New York, NY, USA. Association for Computing Machinery.
Schultz, T., Angrick, M., Diener, L., Küster, D., Meier, M., Krusienski, D. J., Herff, C., and Brumberg, J. S. (2019). Towards restoration of articulatory movements: Functional electrical stimulation of orofacial muscles. In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 3111–3114.
Schultz, T. and Maedche, A. (2023). Biosignals meet Adaptive Systems. SN Applied Sciences, 5(9):234.
Sonawane, B. and Sharma, P. (2021). Review of automated emotion-based quantification of facial expression in Parkinson’s patients. The Visual Computer, 37(5):1151–1167.
Steinert, L., Putze, F., Küster, D., and Schultz, T. (2021). Audio-visual recognition of emotional engagement of people with dementia. In Interspeech, pages 1024–1028.
Tassinary, L. G., Cacioppo, J. T., and Vanman, E. J. (2007). The Skeletomotor System: Surface Electromyography. In Cacioppo, J. T., Tassinary, L. G., and Berntson, G. G., editors, Handbook of Psychophysiology, 3rd edition. Cambridge University Press, Cambridge.