
Larsen, J. T., Norris, C. J., and Cacioppo, J. T. (2003). Effects of positive and negative affect on electromyographic activity over zygomaticus major and corrugator supercilii. Psychophysiology, 40(5):776–785.
Lewinski, P., den Uyl, T. M., and Butler, C. (2014). Automated facial coding: Validation of basic emotions and FACS AUs in FaceReader. Journal of Neuroscience, Psychology, and Economics, 7(4):227–236.
Li, X., Zhang, X., Yang, H., Duan, W., Dai, W., and Yin, L. (2020). An EEG-Based Multi-Modal Emotion Database with Both Posed and Authentic Facial Actions for Emotion Analysis. In 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), pages 336–343, Buenos Aires, Argentina. IEEE.
Littlewort, G., Whitehill, J., Wu, T., Fasel, I., Frank, M., Movellan, J., and Bartlett, M. (2011). The computer expression recognition toolbox (CERT). In 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG), pages 298–305.
Liu, H. (2021). Biosignal processing and activity modeling for multimodal human activity recognition. PhD thesis, University of Bremen.
Liu, H., Hartmann, Y., and Schultz, T. (2021a). CSL-SHARE: A multimodal wearable sensor-based human activity dataset. Frontiers in Computer Science, 3:90.
Liu, H., Hartmann, Y., and Schultz, T. (2021b). Motion Units: Generalized sequence modeling of human activities for sensor-based activity recognition. In 29th European Signal Processing Conference (EUSIPCO 2021). IEEE.
Liu, H., Jiang, K., Gamboa, H., Xue, T., and Schultz, T. (2022). Bell shape embodying zhongyong: The pitch histogram of traditional Chinese anhemitonic pentatonic folk songs. Applied Sciences, 12(16).
Liu, H. and Schultz, T. (2018). ASK: A framework for data acquisition and activity recognition. In Proceedings of the 11th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2018) - Volume 3: BIOSIGNALS, pages 262–268.
Liu, H. and Schultz, T. (2019). A wearable real-time human activity recognition system using biosensors integrated into a knee bandage. In Proceedings of the 12th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2019) - Volume 1: BIODEVICES, pages 47–55.
Liu, H. and Schultz, T. (2022). How long are various types of daily activities? Statistical analysis of a multimodal wearable sensor-based human activity dataset. In Proceedings of the 15th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2022) - Volume 5: HEALTHINF, pages 680–688.
Liu, H., Xue, T., and Schultz, T. (2023). On a real real-time wearable human activity recognition system. In Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2023) - WHC, pages 711–720.
Mauss, I. B. and Robinson, M. D. (2009). Measures of emotion: A review. Cognition & Emotion, 23(2):209–237.
Namba, S., Sato, W., Osumi, M., and Shimokawa, K. (2021a). Assessing Automated Facial Action Unit Detection Systems for Analyzing Cross-Domain Facial Expression Databases. Sensors, 21(12):4222.
Namba, S., Sato, W., and Yoshikawa, S. (2021b). Viewpoint Robustness of Automated Facial Action Unit Detection Systems. Applied Sciences, 11(23):11171.
Noah, T., Schul, Y., and Mayo, R. (2018). When both the original study and its failed replication are correct: Feeling observed eliminates the facial-feedback effect. Journal of Personality and Social Psychology, 114(5):657–664.
Oh Kruzic, C., Kruzic, D., Herrera, F., and Bailenson, J. (2020). Facial expressions contribute more than body movements to conversational outcomes in avatar-mediated virtual environments. Scientific Reports, 10(1):20626.
Ortony, A. (2022). Are All “Basic Emotions” Emotions? A Problem for the (Basic) Emotions Construct. Perspectives on Psychological Science, 17(1):41–61.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830.
Perusquia-Hernandez, M., Dollack, F., Tan, C. K., Namba, S., Ayabe-Kanamura, S., and Suzuki, K. (2021). Smile Action Unit detection from distal wearable Electromyography and Computer Vision. In 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), pages 1–8, Jodhpur, India. IEEE.
Rodrigues, J., Liu, H., Folgado, D., Belo, D., Schultz, T., and Gamboa, H. (2022). Feature-based information retrieval of multimodal biosignals with a self-similarity matrix: Focus on automatic segmentation. Biosensors, 12(12).
Schuller, B., Valstar, M., Eyben, F., Cowie, R., and Pantic, M. (2012). AVEC 2012: The continuous audio/visual emotion challenge. In Proceedings of the 14th ACM International Conference on Multimodal Interaction, ICMI ’12, pages 449–456, New York, NY, USA. Association for Computing Machinery.
Schultz, T. (2010). Facial Expression Recognition using Surface Electromyography.
Schultz, T., Angrick, M., Diener, L., Küster, D., Meier, M., Krusienski, D. J., Herff, C., and Brumberg, J. S. (2019). Towards restoration of articulatory movements: Functional electrical stimulation of orofacial muscles. In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 3111–3114.
Steinert, L., Putze, F., Küster, D., and Schultz, T. (2021). Audio-visual recognition of emotional engagement of people with dementia. In Interspeech, pages 1024–1028.