
calculate a separate pipeline for each signal and then fuse the results. This procedure yields better accuracies and F1-scores than fusing the data at the beginning.
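The per-signal-pipeline idea can be illustrated with a minimal late-fusion sketch. The feature arrays, signal names, and the random-forest classifiers below are hypothetical stand-ins (the paper's actual pipelines are not specified here); the point is only the structure: one model per signal, with class probabilities averaged afterwards instead of concatenating raw data up front.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical windowed features, one array per sensor signal
rng = np.random.default_rng(0)
signals = {
    "acc": rng.normal(size=(100, 12)),  # accelerometer windows
    "gyr": rng.normal(size=(100, 12)),  # gyroscope windows
}
y = rng.integers(0, 3, size=100)  # three example activity classes

# Late fusion: train a separate pipeline (here, a classifier) per signal
models = {}
for name, X in signals.items():
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X, y)
    models[name] = clf

# Fuse at the decision level by averaging class probabilities
probas = np.mean([models[n].predict_proba(signals[n]) for n in signals], axis=0)
fused_pred = probas.argmax(axis=1)
```

Early fusion would instead concatenate the signal features into one matrix before training a single model; the decision-level averaging above is what keeps each signal's pipeline independent.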
The next step is to analyse more data from other care professionals and investigate which sensor position is most relevant. Once several participants have taken part, a leave-one-subject-out cross-validation can be carried out instead of a 10-fold cross-validation in order to check how accurately the model performs on an unknown participant. Furthermore, the data from the Kinect camera will be integrated into the activity recognition to check whether this improves recognition and the F1-score.
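The planned evaluation switch can be sketched as follows. This is a minimal illustration, assuming hypothetical feature and label arrays and a placeholder classifier: leave-one-subject-out cross-validation groups all windows of one participant into a single held-out fold, so every test score reflects an entirely unseen subject, unlike 10-fold cross-validation, where windows from the same person can appear in both training and test splits.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Hypothetical windowed sensor features and activity labels
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))
y = rng.integers(0, 4, size=120)
subjects = np.repeat([0, 1, 2, 3], 30)  # participant id for every window

# Each fold holds out all windows belonging to one participant
logo = LeaveOneGroupOut()
scores = cross_val_score(
    RandomForestClassifier(n_estimators=50, random_state=0),
    X, y, groups=subjects, cv=logo,
)
# scores contains one accuracy per held-out subject
```

With real data, a large gap between the 10-fold and leave-one-subject-out scores would indicate that the model relies on subject-specific patterns rather than generalizable motion features.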
ACKNOWLEDGEMENTS
This study was supported by the Lower Saxony Ministry for Science and Culture with funds from the governmental funding initiative zukunft.niedersachsen of the Volkswagen Foundation, project "Data-driven health (DEAL)".
HEALTHINF 2025 - 18th International Conference on Health Informatics