
adversarial machine learning security domain. Because LLMs are deployed as components of integrated systems in real-world implementations, threat actors can also mount conventional cyber attacks (e.g., eavesdropping, denial of service) against the systems' attack surfaces, and they can combine several threat types within a single attack campaign. It is therefore imperative to have a holistic understanding of all three threat landscapes within a unified threat modeling framework, a perspective that existing threat modeling studies have not reflected.
This paper demonstrates how the widely used STRIDE threat modeling approach can be adapted to systems with LLM-based components. Specifically, we illustrate an LLM-based system using a data flow diagram (DFD) for a healthcare system case study. We then identify security boundaries and systematically elicit threats relevant to the system, focusing on adversarial, conversational, and conventional threats. The unique contribution lies in our detailed demonstration of system modeling for the LLM-based case study and systematic threat elicitation, integrating both the LLM and cybersecurity perspectives.
Despite the importance of threat modeling, there is a notable scarcity of research focusing on LLM-based systems. This is especially true for studies addressing system modeling and systematic threat elicitation. Effective threat modeling for LLM-integrated healthcare systems necessitates combining conventional, adversarial, and LLM-specific conversational threats. Our paper addresses this research gap by offering a structured approach to threat modeling that considers the unique challenges posed by LLMs in healthcare. By doing so, we provide a valuable framework for securing LLM-integrated systems, an area that remains underexplored in the current literature.
The paper is organized as follows: Section 2 reviews related work, Section 3 outlines the study's methodology, Section 4 presents the case study results, Section 5 discusses key findings, and Section 6 concludes the paper.
2 RELATED WORK
Research on large language model security and robustness has advanced rapidly, uncovering a wide range of vulnerabilities that threaten safe usage across various domains. Attacks such as prompt injection and jailbreaking have received particular attention, as adversaries manipulate inputs to elicit malicious outputs from the models. Many studies have classified and comprehensively evaluated how these attacks are applied, showing that even minimal manipulations can induce harmful behaviors (Chu et al., 2024; Deng et al., 2024; Greshake et al., 2023; Rossi et al., 2024). In addition, the integration of third-party LLM plugins and APIs into applications introduces new vulnerabilities, such as proprietary information leakage and malicious exploitation (Iqbal et al., 2024; Zhao et al., 2024; Finlayson et al., 2024). Data contamination is a further critical threat: manipulation of training data can degrade model performance and enable the extraction of proprietary knowledge without access to the original dataset (Pathmanathan et al., 2024; Balloccu et al., 2024; Truong et al., 2021).
In healthcare, LLMs are used for tasks such as medical knowledge retrieval and the personalization of treatments (Abbasian et al., 2024; Tang et al., 2024), but the same capabilities make them attractive targets for misuse, such as spreading false medical advice and misinformation or gaining unauthorized access to sensitive patient data (Shah et al., 2024; Chen and Shu, 2024). Studies have shown that sophisticated attack vectors like indirect prompt injection can remotely compromise LLM-integrated healthcare applications, leading to severe consequences such as data breaches and the erosion of trust in medical systems (Chu et al., 2024; Greshake et al., 2023).
To mitigate these threats, frameworks such as STRIDE (Tete, 2024) and risk assessments (Pankajakshan et al., 2024) are used to identify and evaluate risks. However, despite using STRIDE and DREAD, Tete's framework focuses on high-level threat identification without linking threats to specific system components (Tete, 2024). Similarly, Pankajakshan et al. emphasize broad risk categorization across stakeholders but lack detailed, component-specific threat elicitation (Pankajakshan et al., 2024). Both approaches fail to provide the structured system modeling needed to identify which components introduce specific threats, limiting their effectiveness for targeted security measures.
Categorizations of LLM threats are provided by resources such as MITRE ATLAS (MITRE, 2024), the OWASP Top 10 for LLM applications (OWASP, 2024), and the NIST adversarial machine learning taxonomy (Vassilev et al., 2023). These frameworks highlight the importance of aligning LLMs with human values and adhering to regulatory standards, such as the European Union's AI Act and the NIST AI Risk Management Framework, to ensure ethical and secure deployment (Tang et al., 2024). These resources serve as instrumental knowledge bases for threat modeling.
Our paper uniquely advances the field by focusing on structured system modeling and systematic threat elicitation, pinpointing which components generate specific threats. Unlike prior studies, we comprehensively address common cyberattack vectors, adversar-