
Table 2 – continued from previous page.

Type | Data flow | Identified threats | Threat description
Conv | Data Engineering (P), Performance Monitoring (P), Central Healthcare System (P) | Data confidentiality threats | Data confidentiality threats in healthcare involve unauthorized access to or disclosure of sensitive medical data. Attackers could use backdoors, malware, or software vulnerabilities to compromise the process, leading to privacy breaches and misuse of personal information (McGraw and Figueroa, 2020).
Adv | Model Training (P), Model Tuning (P) | Model reprogramming | Model reprogramming involves altering healthcare AI/ML models so that they produce incorrect or biased outputs. Attackers could gain access to the model through weak access control policies and maliciously fine-tune it (a sketch of this threat follows the table). This could pose risks to patient safety and treatment effectiveness (Marshall and Parikh, 2022).
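To make the model reprogramming row concrete, the following minimal PyTorch sketch illustrates how an attacker with write access to a deployed model could carry out malicious fine-tuning. The toy classifier, the poisoned batch, and the checkpoint path are illustrative assumptions, not components of the studied system.

    # Sketch of the "model reprogramming" threat from Table 2: an attacker
    # with write access to a deployed healthcare classifier fine-tunes it on
    # mislabeled data so that one class (e.g., "high risk") is suppressed.
    import torch
    import torch.nn as nn

    # Stand-in for a deployed diagnostic model (assumption: a small MLP
    # mapping 16 vital-sign features to a low/high risk decision).
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

    # Attacker-crafted batch: genuine-looking records, but every label is
    # forced to class 0 ("low risk"), including true high-risk cases.
    x = torch.randn(64, 16)
    y = torch.zeros(64, dtype=torch.long)  # poisoned labels

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    # A few silent fine-tuning steps bias the model while leaving its
    # architecture, interface, and file name unchanged.
    for _ in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

    # The tampered weights overwrite the production checkpoint; weak access
    # control on this path is the precondition highlighted in Table 2.
    torch.save(model.state_dict(), "model.ckpt")

Because nothing visible changes except the weights, such tampering is hard to detect downstream, which is why this threat is tied to access control on the Model Training and Model Tuning data flows.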
itoring devices, sends it to central servers, and facilitates its analysis with ML models. Our particular emphasis is on ML-based system assets. We created a system model covering the entire ML model development life cycle and systematically elicited threats. In the future, we aim to extend this study by prioritizing threats and identifying relevant countermeasures. Privacy threat modeling of such systems is another research direction.
REFERENCES
Ali Alatwi, H. and Morisset, C. (2022). Threat modeling for machine learning-based network intrusion detection systems. In 2022 IEEE International Conference on Big Data (Big Data), pages 4226–4235.
Marshall, A. and Parikh, J. (2022). Threat modeling AI/ML systems and dependencies.
Apruzzese, G., Anderson, H. S., Dambra, S., Freeman, D., Pierazzi, F., and Roundy, K. (2023). “Real attackers don’t compute gradients”: Bridging the gap between adversarial ML research and practice. In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), pages 339–364. IEEE.
Cagnazzo, M., Hertlein, M., Holz, T., and Pohlmann, N. (2018). Threat modeling for mobile health systems. In 2018 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), pages 314–319.
Chandrasekaran, V., Chaudhuri, K., Giacomelli, I., Jha, S., and Yan, S. (2020). Exploring connections between active learning and model extraction. In 29th USENIX Security Symposium (USENIX Security 20), pages 1309–1326.
Deng, M., Wuyts, K., Scandariato, R., Preneel, B., and Joosen, W. (2011). A privacy threat analysis framework: Supporting the elicitation and fulfillment of privacy requirements. Requirements Engineering, 16(1):3–32.
Visengeriyeva, L., Kammer, A., and Bär, I. (2023). MLOps principles. Last accessed 12.05.2024.
Estonian Information Systems Authority (2024). Tehisintellekti ja masinõppe tehnoloogia riskide ja nende leevendamise võimaluste uuring [Study of the risks of artificial intelligence and machine learning technology and options for their mitigation].
European Union Agency for Cybersecurity (ENISA) (2020). Artificial intelligence cybersecurity challenges.
McGraw, G. and Figueroa, H. (2020). An architectural risk analysis of machine learning systems: Toward more secure machine learning.
Holik, F., Yeng, P., and Fauzi, M. A. (2023). A comparative assessment of threat identification methods in EHR systems. In Proceedings of the 8th International Conference on Sustainable Information Engineering and Technology, pages 529–537.
Jagielski, M., Oprea, A., and Biggio, B. (2018). Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In 2018 IEEE Symposium on Security and Privacy (SP), pages 19–35.
Kakhi, K., Alizadehsani, R., Kabir, H. D., Khosravi, A., Nahavandi, S., and Acharya, U. R. (2022). The internet of medical things and artificial intelligence: Trends, challenges, and opportunities. Biocybernetics and Biomedical Engineering, 42(3):749–771.
Khalil, S. M., Bahsi, H., and Korõtko, T. (2023). Threat modeling of industrial control systems: A systematic literature review. Computers & Security, page 103543.
Latif, S., Rana, R., Qadir, J., Ali, A., Imran, M., and Younis, S. (2017). Mobile health in the developing world: Review of literature and lessons from a case study. IEEE Access, PP:1–1.
Mauri, L. and Damiani, E. (2022). Modeling threats to AI-ML systems using STRIDE. Sensors, 22(17).
Min, S., Lee, B., and Yoon, S. (2017). Deep learning in bioinformatics. Briefings in Bioinformatics, 18(5):851–869.
Mozaffari-Kermani, M. and Sur-Kolay, S. (2015). Systematic poisoning attacks on and defenses for machine learning in healthcare. IEEE Journal of Biomedical and Health Informatics, 19(6):1893–1905.