How to Model Privacy Threats in the Automotive Domain
Mario Raciti 1,2,a and Giampaolo Bella 2,b
1 IMT School for Advanced Studies Lucca, Lucca, Italy
2 Dipartimento di Matematica e Informatica, Università di Catania, Catania, Italy
a https://orcid.org/0000-0002-7045-0213
b https://orcid.org/0000-0002-7615-8643
Keywords:
Threat Modelling, Risk Assessment, Automotive, Web, LINDDUN.
Abstract:
This paper questions how to approach threat modelling in the automotive domain at both an abstract level that features no domain-specific entities such as the CAN bus and, separately, at a detailed level. It addresses such questions by contributing a systematic method that is currently affected by the analyst’s subjectivity because most of its inner operations are only defined informally. However, this potential limitation is overcome when candidate threats are identified and left to everyone’s scrutiny. The systematic method is demonstrated on the established LINDDUN threat modelling methodology with respect to 4 pivotal works on privacy threat modelling in the automotive domain. As a result, 8 threats that the authors deem not representable in LINDDUN are identified and suggested as possible candidate extensions to LINDDUN. Also, 56 threats are identified, providing a detailed, automotive-specific model of threats.
1 INTRODUCTION
The world has realised that a whole new range of ser-
vices tailored to car drivers’ preferences and habits
can be designed and made available, for example
leveraging the computerised infrastructures of Smart
Cars (ENISA, 2019), Smart Roads (Pompigna and
Mauro, 2022) and Smart Cities (Toh and Martinez,
2020). Following a consolidated business model,
such services can be delivered virtually for free to
drivers, namely at the sole price of enabling the ser-
vice provider to act as a Data Controller or a Data
Processor on behalf of each individual driver. In sim-
pler terms, the price is to allow the service provider
to treat the drivers’ data according to the conditions
specified in the driver’s consent. Therefore, it is clear that privacy threats also affect citizens when they generate personal data by driving modern cars.
The General Data Protection Regulation (GDPR)
is the European answer to the privacy needs of its citi-
zens, and is proving inspirational for other similar, in-
ternational regulations. It warns about a personal data
breach, the “accidental or unlawful destruction, loss,
alteration, unauthorised disclosure of, or access to,
personal data transmitted, stored or otherwise pro-
cessed”, which, therefore, may affect all scenarios in
which personal data, namely “any information relat-
ing to an identified or identifiable natural person”,
are processed, which implies any form of “collection,
recording, organisation, structuring, storage, adap-
tation or alteration, retrieval, consultation, use, dis-
closure by transmission, dissemination or otherwise
making available, alignment or combination, restric-
tion, erasure or destruction” (GDPR, 2016).
The mentioned privacy issues in the automo-
tive domain are perhaps insufficiently understood at
present, but are certain to demand GDPR compliance.
Compliance is meaningfully assessed in terms of pri-
vacy risk assessment, which in turn demands privacy
threat modelling, hence the general motivation for this
paper. Following the GDPR extracts given above, a
personal data breach represents the essential and most
abstract version of a threat to citizens’ privacy. Identifying it is only an archetypal threat modelling exercise, whereas threat modelling at large is an established and challenging research area.
Research Questions and Contributions. Threat
modelling is challenging. The analyst faces a version
of a soundness and completeness problem. Soundness
may be interpreted, at least, as a sufficiently unambiguous and detailed description of each threat. However, completeness may be more impactful because failing to account for specific threats would cause pitfalls in the subsequent risk assessment. Completeness is also
very challenging because the analyst needs to decide
whether an extra threat is to be added to the current
list, and this, in turn, raises two further problems.
One is that the threat to be potentially added needs
to be discovered. The other one is the scrutiny of whether that threat may be redundant with the current list, given that redundancy may lead to inconsistencies in the risk assessment. Even after appropriate decisions in this circumstance, the completeness problem recurs. Therefore, the general research ques-
tion that this paper sets for itself corresponds to its
title: How to model privacy threats in the automotive
domain?
To address such a question, we observe that a
widely established privacy threat modelling method-
ology exists, LINDDUN (Deng et al., 2011). So, an
answer could be found, potentially, in such a method-
ology. However, LINDDUN is meant to be domain-
independent, a feature that is bound to keep its threat
descriptions only at an abstract level of detail. For ex-
ample, threat L_ds4 stands for “Excessive data available”. A specific level of detail is not prescribed and
depends on the analyst’s knowledge and experience
as applied to the specific exercise. However, we ques-
tion whether LINDDUN is enough for the automotive
domain, irrespective of its level of detail, hence we
set a specific research question:
SRQ1. Can LINDDUN be considered com-
plete, albeit abstract, when applied to auto-
motive privacy?
Continuing our argument, it may be observed that the
analyst may want a rather detailed model of threats
for the automotive domain, namely a list of threats
that specifically revolve around the typical entities in-
volved in modern cars, such as threat “CAN eaves-
dropping”. This means that another specific research
question arises:
SRQ2. Can we model detailed and specific
privacy threats for the automotive domain that
can be considered complete with respect to
best practices from the state of the art?
It is clear that answering the two specific research
questions would answer the general research ques-
tion. In other words, SRQ1 concerns whether LIND-
DUN suffices when threat modelling can be rather
abstract, while SRQ2 concerns the same problem
though at a detailed level, where no de facto standard
methodology and only a few notable approaches ex-
ist, to the best of our knowledge.
This paper answers both the specific research
questions by advancing a systematic method towards completeness in threat modelling. Our method leverages
LINDDUN, as it can be expected, and selects 4 rele-
vant sources that are considered best practices in the
state of the art:
1. “Good practices for security of smart
cars” (ENISA, 2019),
2. “Privacy threat analysis for connected and au-
tonomous vehicles” (Chah et al., 2022),
3. “A double assessment of privacy risks aboard top-selling cars” (Bella et al., 2023),
4. “Calculation of the complete Privacy Risks list v2.0” (OWASP, 2021).
It can be seen that the final source pertains to web
privacy, which can be justified on the basis of the tight
interrelations with the automotive domain.
Our systematic method comprises 5 steps. Its gist is to
derive a list of preliminary threats from the 4 sources
just itemised. Precisely, the preliminary threats are
found to be 95. Then, these are polished accord-
ing to various operations that we introduce below, to
produce the final threats. These amount precisely to
56 threats. We shall see that such final threats an-
swer SRQ2. However, if we continue by mapping
the final threats into LINDDUN threats, we find out
that 8 threats could not be mapped, thereby conclud-
ing how LINDDUN could be potentially expanded,
and effectively answering also SRQ1. It is notewor-
thy that, although our independent modelling stems
from the specific automotive application domain, the
8 threats that were left outstanding with respect to
LINDDUN are general privacy threats, namely they ignore domain-specific entities. This signifies that all domain-specific threats could be mapped to more general LINDDUN threats.
2 LINDDUN AND RELATED
WORK
LINDDUN is a privacy threat modelling methodology, inspired by STRIDE, that supports analysts in the systematic elicitation and mitigation of privacy threats in software architectures. LINDDUN’s privacy knowledge support represents one of the main strengths of this methodology, and it is structured according to the 7 privacy threat categories encapsulated within LINDDUN’s acronym (Deng et al., 2011), namely Linkability, Identifiability, Non-repudiation, Detectability, Disclosure of information, Unawareness, Non-compliance.
Van Landuyt and Joosen (Van Landuyt and Joosen, 2020) highlighted the influence of assumptions on the outcomes of the analysis during the risk assessment process, more precisely in the threat modelling phase, in the context of a LINDDUN privacy threat elicitation.
Vasenev et al. (Vasenev et al., 2019) were among the
first to apply an extended version of STRIDE (Mi-
crosoft, 2009) and LINDDUN (Deng et al., 2011)
to conduct a threat analysis on security and privacy
threats in the automotive domain. In particular, the
case study is specific to long-term support scenarios for over-the-air updates. Chah et al. (Chah et al., 2022) applied the LINDDUN methodology to elicit and analyse the privacy requirements of a CAV system, while respecting the privacy properties set by the GDPR. Such an attempt represents a solid baseline for the broader process of privacy risk assessment tailored for the automotive domain.
3 OUR SYSTEMATIC METHOD
Our method is systematic but not yet fully formalised.
This means that most of its steps and operations are still only informally defined, as already
noted. This will become apparent below. We shall see
that our findings are remarkable despite the currently
mostly informal approach.
The pivotal notion that we rely upon is threat em-
bracing. It aims to capture the standard scrutiny that the analyst performs over a list of threats to understand the extent of their semantic similarity. One
element of scrutiny here derives from the possible use
of synonyms, for example a threat might mention the
word “protocol” and a similar threat may just rewrite
the first one by replacing that word with “distributed
algorithm”. Arguably, the analyst would conclude
that these threats are embraceable and embrace them
by selecting the one with the wording that they find
most appropriate, and discarding the other one.
Another element of scrutiny derives from the level
of detail of the statement describing a threat. For
example, “Unchanged default password” is certain to
be more detailed than (the more abstract) threat “Hu-
man error”. The analyst will typically conclude also
in this case that these two threats are embraceable and
proceed to embrace them by selecting the one whose
level of detail they find most appropriate for the spe-
cific threat modelling exercise they are doing. Nor-
mally, the analyst strives to choose a consistent level
of detail till the end of the exercise.
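Such scrutiny is a human judgement, but a rough similarity score can shortlist the pairs that deserve it. The sketch below is only an illustration under our own assumptions, not part of the method as defined in this paper; the names tokens, similarity and candidate_pairs are hypothetical, and the 0.4 threshold is arbitrary. It flags label pairs whose word overlap is high, leaving the actual embraceability decision to the analyst.

from itertools import combinations

def tokens(label: str) -> set:
    # Lower-cased word set of a threat label; a real tool would also
    # handle synonyms (e.g. "protocol" vs "distributed algorithm").
    return set(label.lower().replace("/", " ").split())

def similarity(a: str, b: str) -> float:
    # Jaccard overlap between the word sets of two labels.
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if (ta or tb) else 0.0

def candidate_pairs(labels: list, threshold: float = 0.4) -> list:
    # Pairs of labels similar enough to deserve the analyst's scrutiny;
    # whether they are actually embraceable remains a human decision.
    return [(a, b) for a, b in combinations(labels, 2)
            if similarity(a, b) >= threshold]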
The five steps of our systematic method, sup-
ported by a running example, are detailed below.
3.1 Step 1 Threat Collection
The first step involves the collection of the threats
that the analyst deems relevant, namely arising from
relevant sources. These may vary from case to
case, and the analyst normally appeals to reliable
academic publications, international standards, best-
practice documents and other authoritative material
from governmental bodies, research institutions and
so on. In this paper, we selected the 4 relevant sources
outlined above. There is no specific limit to the num-
ber of threats to be collected, and these may reach the
order of hundreds, of course, depending once more on
the application domain. There is also, in this step, no
limit to the quality of threats that are collected, hence
these are likely to bear redundancy and clear semantic
similarity. These issues will be faced in the following
steps.
In slightly more formal terms, this step is to build
a list P of preliminary threats and assign an identifier
to each threat so that:
P = p_1, ..., p_n.
It is useful to organise the threats in a table, say P, following the vertical dimension. Table P will grow with more and more columns as our systematic method proceeds. If C_k is the projection function that takes a table and yields its k-th column, then:
C_1(P) = P.
The second column carries the threat description, or
label in brief, for each threat by means of function
label:
C_2(P) = LB, where LB = label(p_1), ..., label(p_n).
We may also formalise the source that each threat derives from, which in this step corresponds to its origin document, by means of a function source. It is useful to note it down in the third column:
C_3(P) = S, where S = source(p_1), ..., source(p_n).
A demonstration of this step on our running example
yields a table with three columns that is omitted here
but corresponds to the first three columns of Table 1.
Threat sources are only symbolically represented be-
cause the example is a mock-up.
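For concreteness, a minimal sketch of Step 1 under our own assumptions (plain Python dictionaries standing in for the rows of P; nothing here is prescribed by the method) could store the first three columns and expose the projection C_k as follows:

# Table P for the running example: one record per preliminary threat,
# carrying the identifier, the label (LB) and the source (S).
P = [
    {"id": "p1", "label": "Insufficient randomness of session ID",
     "source": "source(p1)"},
    {"id": "p2", "label": "Session control mechanisms may be hijacked",
     "source": "source(p2)"},
    {"id": "p3", "label": "Browser is not updated",
     "source": "source(p3)"},
]

def column(table: list, key: str) -> list:
    # Projection over the table, e.g. column(P, "label") yields the LB column.
    return [row[key] for row in table]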
3.2 Step 2 Categorisation
The second step extends P by categorising each pre-
liminary threat in P with respect to the LINDDUN
properties. In particular, we add a column to P for
each of the seven properties, then we tick a cell if
the given threat relates to that property. Obviously,
a threat may apply to multiple LINDDUN properties.
Such operations may be demanding because each de-
pends on both the level of detail of the given threat and
on the knowledge that is available on the target system
and the threat scenarios. In particular, the categorisation step is prone to the analyst’s subjectivity and bias, as well as to human errors. All this is further discussed below.
More formally, let us introduce boolean functions
following the LINDDUN acronym isL, isI, isN, isD,
isDi, isU and isNc, which take a threat and hold when
that threat applies to the respective property. For ex-
ample:
isN(t) = ✓ if the analyst decides that threat t affects property N, and ✗ otherwise.
In practice, the ✗ symbol is often omitted for readability, leaving an empty cell. Therefore, P grows as follows:
C_4(P) = L, where L = isL(p_1), ..., isL(p_n),
C_5(P) = I, where I = isI(p_1), ..., isI(p_n),
C_6(P) = N, where N = isN(p_1), ..., isN(p_n),
C_7(P) = D, where D = isD(p_1), ..., isD(p_n),
C_8(P) = Di, where Di = isDi(p_1), ..., isDi(p_n),
C_9(P) = U, where U = isU(p_1), ..., isU(p_n),
C_10(P) = Nc, where Nc = isNc(p_1), ..., isNc(p_n).
Table 1 shows the threats put together through the
previous step now enriched with appropriate ticks to
highlight the concerned LINDDUN properties. In
particular, it can be noticed that all threats concern
linkability.
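Continuing the dictionary-based sketch (again our own assumption rather than the paper’s notation), Step 2 reduces to recording, for each preliminary threat, the set of LINDDUN properties that the analyst ticks:

LINDDUN = ("L", "I", "N", "D", "Di", "U", "Nc")

def categorise(threat: dict, ticked: set) -> None:
    # Record the analyst's judgement; a threat may affect several properties.
    unknown = set(ticked) - set(LINDDUN)
    if unknown:
        raise ValueError(f"unknown LINDDUN properties: {unknown}")
    threat.setdefault("properties", set()).update(ticked)

# Running example (cf. Table 1): every preliminary threat concerns Linkability.
p1 = {"id": "p1", "label": "Insufficient randomness of session ID"}
categorise(p1, {"L"})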
3.3 Step 3 Manipulation
The third step shapes a new list F of threats that we call final threats and stores them in the first column of a new table F. Formally:
F = f_1, ..., f_m,
C_1(F) = F.
Columns from 4 through to 10 in F are defined as
with P but over F rather than over P. We need to
specify how to fill the new table up. The underlying
concept is to build this table to solve the redundancies
arisen in the old table. In fact, the use of different
sources inherently leads to different levels of detail
and various overlaps, beside the fact that some entries
could refer to the same threat scenario. To address
these issues, we define a list of operations to build the
final threats upon the basis of the preliminary threats.
The range of the various indexes is self-evident and
omitted here for brevity.
The first operation applies to a list of generic
threats t_1, ..., t_s (namely, they could be either preliminary or final) which are considered embraceable:
o_1. embrace(t_1, ..., t_s).
The result of this operation is a threat that gets the same id and label as the input threat with the most pertinent level of detail according to the analyst. Otherwise, if all preliminary threats that are considered bear a similar level of detail, then the final threat gets the same label as the first element of the list. The LINDDUN properties corresponding to the computed threat are the union of all properties that were ticked for each of the input threats t_1, ..., t_s. In general, threats can be embraced together multiple times, both at preliminary and at final levels. This operation is useful to build F, hence to build a final threat from given preliminary threats, namely:
f_l := embrace(p_i, ..., p_j).
The second operation renames a threat:
o_2. rename(t_q).
The analyst may judge the default label as incomplete
and feel the need to modify the level of detail of the
threat label, while the ticked LINDDUN properties
remain unvaried. This is specifically useful, for ex-
ample, when we want to assign a proper label to a
threat in F produced by an embrace of threats:
f := rename( f ).
The last operation discards a threat, meaning that the
considered threat is excluded from the current table
(and possibly moved to a reserve list for future re-
inspection):
o_3. discard(t).
This is necessary when a threat is inapplicable to the
domain, e.g., it strictly refers to security rather than
privacy or is considered irrelevant for the particular
target system. An example application is to a pre-
liminary threat, which is therefore not going to be re-
ported in F . This is the only operation that can be
defined formally here. If index 0 denotes an empty
threat, we have that:
discard(t) = t_0.
It is crucial to apply the operations above with cau-
tion to avoid the loss of relevant information and
keep the semantics of the threats unvaried. More-
over, operations can be nested. For example, given
a list of threats referring to the same threat scenario,
it may happen that none of them embraces the oth-
ers, thus the resulting label of an embrace opera-
tion would be inappropriate. This issue can be ad-
dressed by nesting the first two operations as follows:
rename(embrace(p_i, ..., p_j)).
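A minimal sketch of the three operations, again over the dictionary representation assumed earlier (the keep and new_label parameters encode the analyst’s choices and are our own additions), could read:

def embrace(*threats: dict, keep: int = 0) -> dict:
    # The analyst chooses which input carries the most pertinent label
    # (index `keep`, defaulting to the first); the ticked LINDDUN
    # properties of the result are the union over all inputs.
    result = dict(threats[keep])
    result["properties"] = set().union(
        *(t.get("properties", set()) for t in threats))
    return result

def rename(threat: dict, new_label: str) -> dict:
    # Adjust the level of detail of the label; ticked properties stay unvaried.
    return {**threat, "label": new_label}

def discard(threat: dict):
    # Exclude the threat from the current table; a reserve list for future
    # re-inspection would be an easy extension.
    return None

# Running example: p2 is considered more general than p1.
p1 = {"id": "p1", "label": "Insufficient randomness of session ID",
      "properties": {"L"}}
p2 = {"id": "p2", "label": "Session control mechanisms may be hijacked",
      "properties": {"L"}}
f1 = rename(embrace(p1, p2, keep=1), "Weak web session control mechanisms")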
Table 1: Outcomes of Step 1 and Step 2 on our running example.
P    LB                                          S            L  I  N  D  Di  U  Nc
p_1  Insufficient randomness of session ID       source(p_1)  ✓
p_2  Session control mechanisms may be hijacked  source(p_2)  ✓
p_3  Browser is not updated                      source(p_3)  ✓

On our running example, we observe that p_1 and p_2 are embraceable, hence we apply the embrace operation. We consider p_2 more general than p_1, hence we set:
embrace(p_1, p_2) = p_2,
f_1 := p_2.
We then rename the outcome by operation rename:
rename(p_2) = “Weak web session control mechanisms”.
Finally, we observe that preliminary threat p_3 is a verifiable event, namely its likelihood would be null or top in a given scale. We decide that this event belongs more correctly to the list of security measures that can be verified by controls, rather than to a threat list. Therefore, we apply discard(p_3). The final outcome of this list of operations is shown in Table 2.
Having reached the end of this step, table F of
final threats can be leveraged to answer SRQ2, as we
shall see in Section 4.
3.4 Step 4 Mapping
The fourth step consists in verifying whether the
threat catalogue proposed by the LINDDUN frame-
work covers the threats in F and vice versa. This can
be done by appropriate applications of the embrace
operations, as detailed here.
For each final threat f and each of the properties
that are ticked in F for it, we study the correspond-
ing LINDDUN property tree to distil out all nodes
that are embraceable with f and then apply the oper-
ation. The analyst should proceed carefully to make
sure that every embrace operation yields a LINDDUN
threat because this is useful to address the research
questions stated above. By contrast, such a requirement on the application of the operation could be relaxed should the analyst have a different aim, for example modelling what they find to be the best threats according to their own knowledge and experience.
By proceeding systematically, a new table M can
be built, representing all mapped threats, namely all
LINDDUN threats that could embrace a final threat.
Also, this table has the usual structure, but columns
numbered 4 through to 10 can be omitted because
only LINDDUN threats are represented and their
identifier suggests the overarching property. For ex-
ample, the LINDDUN nodes in the Linkability prop-
erty tree that we deem embraceable with f_1 are:
L_df1 = “Linkability of transactional data (transmitted data)”,
L_df4 = “Non-anonymous communication are linked”,
L_df10 = “Based on session ID”.
Therefore, we calculate:
embrace(f_1, L_df1, L_df4, L_df10) = L_df10
and assign:
m_1 := L_df10.
The operation yields L_df10, which, according
to the LINDDUN notation, must be read as “Non-
anonymous communication are linked based on ses-
sion ID”. This particular embrace is coherent with the
aim to answer SRQ1, as we shall see in Section 4.
Of course, if other LINDDUN properties were ticked,
the list of threats to embrace should include the addi-
tional LINDDUN threats that would be embraceable
with f_1, taken from the corresponding property trees.
It may now be the case that the analyst does not
feel like mapping some final threats to any of the
LINDDUN ones. It means that the analyst feels that
no LINDDUN threat is embraceable with those final
threats. When this is the case, our systematic method
would highlight a limitation of LINDDUN in terms
of coverage. By taking typical domain details off the
final threats that could not be mapped, we get a list of
valid candidates to become new nodes in the pertain-
ing threat tree(s) of an amended LINDDUN method-
ology.
Finally, it is noteworthy that this step implicitly provides the opportunity to correct potential errors from Step 2, as it offers a more granular view thanks to the significant number of nodes to examine.
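To make Step 4 concrete under the same assumptions (the Linkability tree below is a three-node stub taken from the example above, and the judge callback merely stands in for the analyst’s embraceability judgement), a mapping pass could be sketched as:

# Stub of the Linkability property tree: node identifier -> node label.
LINKABILITY_TREE = {
    "L_df1": "Linkability of transactional data (transmitted data)",
    "L_df4": "Non-anonymous communication are linked",
    "L_df10": "Based on session ID",
}

def map_final_threat(f: dict, trees: dict, judge):
    # For each ticked property, collect the tree nodes that the analyst
    # (modelled by `judge`) deems embraceable with f, then keep the last,
    # i.e. most specific, one as the mapped LINDDUN threat. Returning None
    # marks an unmapped threat, hence a candidate LINDDUN extension.
    candidates = [node
                  for prop in f.get("properties", set())
                  for node, label in trees.get(prop, {}).items()
                  if judge(f, node, label)]
    return candidates[-1] if candidates else None

# Running example: f1 maps to L_df10, read together with its ancestors as
# "Non-anonymous communication are linked based on session ID".
f1 = {"id": "f1", "label": "Weak web session control mechanisms",
      "properties": {"L"}}
m1 = map_final_threat(f1, {"L": LINKABILITY_TREE},
                      judge=lambda f, node, label: True)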
3.5 Step 5 Safety Check
The last step implements a further safety check of
Step 2, when we may have assigned an insufficient list
of pertaining properties to the preliminary threats that
were later embraced in some final threat. To counter that, this step prescribes that, for each final threat f, the analyst assess all LINDDUN property trees, as was done in Step 4 for the pertaining properties only. The clear aim is to find any LINDDUN threat at all that would be embraceable with f.
Table 2: Outcomes of Step 3 on our running example.
F    LB                                    S                          L  I  N  D  Di  U  Nc
f_1  Weak web session control mechanisms   rename(embrace(p_1, p_2))  ✓

Furthermore, this step is relevant because the assignment of properties to threats was only done with preliminary threats. The final threats may include,
for example after the analyst’s renaming operation, a
level of detail that may highlight some link with the
LINDDUN properties. Therefore, this step is crucial
to also minimise the odds of erroneous exclusions,
which would lead the analyst to conclude that certain
final threats could not be mapped into LINDDUN.
For example, following in-depth scrutiny, we may
now observe that f_1 also concerns the Identifiability and Disclosure of information properties, due to threats:
I_df1 = “Identifiability of transactional data (transmitted data)”,
I_df6 = “Non-anonymous communication traced to entity”,
I_ds2 = “Non-anonymous communication are linked”,
I_df10 = “Based on session ID”.
Therefore, we update m_1 by means of a further embrace operation that is larger than the previous one:
m_1 := embrace(f_1, L_df1, L_df4, L_df10, I_df1, I_df6, I_ds2, I_df10).
This means that the analyst gains an additional op-
portunity to decide how to best represent f_1 within
LINDDUN.
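As a last sketch under the same assumptions (tree contents are stubs and the keyword-based judge only mimics the analyst), the safety check re-assesses a final threat against every property tree, not only those ticked in Step 2, and records any newly discovered properties:

# Stub trees: in the running example both the Linkability and the
# Identifiability trees contain a "Based on session ID" leaf.
TREES = {
    "L": {"L_df10": "Based on session ID"},
    "I": {"I_df10": "Based on session ID"},
    "Di": {},  # and so on for N, D, U, Nc
}

def safety_check(f: dict, trees: dict, judge) -> set:
    # Properties whose trees contain at least one node embraceable with f.
    hits = {prop for prop, nodes in trees.items()
            for node, label in nodes.items() if judge(f, node, label)}
    missed = hits - f.get("properties", set())
    # Any property missed in Step 2 is added, so that the mapping of
    # Step 4 can be enlarged accordingly.
    f.setdefault("properties", set()).update(missed)
    return missed

f1 = {"id": "f1", "label": "Weak web session control mechanisms",
      "properties": {"L"}}
newly_found = safety_check(
    f1, TREES, judge=lambda f, node, label: "session" in label.lower())
# newly_found == {"I"}, mirroring how f1 gains Identifiability in the example.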
4 DEMONSTRATION OF OUR
METHOD
We apply our systematic method described above to
address the specific research questions. The full out-
comes, including the 95 preliminary and the 56 detailed, final privacy threats for the automotive domain, are
released on a GitHub repository (Raciti and Bella,
2023). In particular, the latter threats, built by taking
our systematic method up to Step 3, answer SRQ2.
We provide two distinct Excel files, reflecting the
three automotive sources and the web application
source separately. Each file contains sheets named
according to the same terminology introduced in Sec-
tion 3, namely results from Step 1 are included in the
sheet ”Step 1”, and so on.
4.1 The 3 Sources from Automotive
The following paragraphs reflect the steps of our sys-
tematic method and feature a few notable examples
for the sake of brevity.
Collection. We selected three sources of threats that
pertain to the automotive domain. To do so, we
appealed to a best-practice document, namely the Good Practices for Security of Smart Cars report (ENISA, 2019), and two recent and reliable academic publications (Chah et al., 2022) (Bella et al., 2023). The report by ENISA provides a list of relevant threats and risks with a focus on “cybersecurity for safety”. The second contribution (Chah et al., 2022) provides an extract of some vulnerabilities and
privacy-related attack scenarios onboard and outboard
connected and autonomous vehicles (CAVs). Further-
more, the third and last source (Bella et al., 2023) fea-
tures a list of privacy threats targeting the automotive
domain. Following our systematic method, we col-
lected a total of 75 preliminary threats, distributed as
Table 3 shows.
Table 3: Distribution of preliminary threats over the 3 auto-
motive privacy sources.
Source Number of threats
ENISA 30
Chah et al. 20
Bella et al. 25
Total 75
Categorisation. Subsequently, we applied a categorisation of the 75 threats. In particular, the first
source (ENISA) provides a threat taxonomy that in-
cludes descriptions of the threats. Therefore, we
leveraged such descriptions to better identify the
LINDDUN properties affected by those threats. Fur-
thermore, the second source (Chah et al., 2022) of-
fers a view of the privacy threats along with the attack
scenarios, preconditions and the LINDDUN proper-
ties affected. We trust the work of the authors, thus for each threat from this source we ticked the very same LINDDUN properties. The third source (Bella
et al., 2023) derives the list of privacy threats from a
STRIDE threat modelling and justifies them in prose,
therefore we leveraged such descriptions to identify,
once more, the affected LINDDUN properties.
Manipulation. At this point, we expected the three
different sources to provide threat labels with vari-
ous levels of detail and different terminology. There-
fore, we adopted all three operations discussed
above for this step to slim down the list of prelimi-
nary threats. For the sake of simplicity, Table 4 shows an extract of the final threats and their derivation process for three illustrative threats, namely f_21 “Infotainment alteration”, f_35 “Unauthorised access in OEM and/or car services” and p_30 “Car depleted battery”.
In detail, f_21 “Infotainment alteration” is derived by embrace over the following threats: p_73 “Infotainment alteration”, p_3 “Manipulation of hardware and software”, p_37 “An adversary can execute arbitrary code on the telematics unit (TCU) and take control of the device.”, p_43 “Attacker operates physically on the TC by tampering the device firmware.” and p_44 “An attacker could perform remote control by installing remotely his own software on the device.”.
By contrast, f_35 “Unauthorised access in OEM and/or car services” is obtained via a combination of the operations rename and embrace of the threats: p_63 “Unauthorised diagnostic access” and p_11 “Unauthorised activities”. Finally, p_30 “Car depleted battery” is discarded by discard, since we deemed it irrelevant as a privacy threat.
Table 4: Example of threat finalisation.
F     S
f_21  embrace(p_73, p_3, p_37, p_43, p_44)
f_35  rename(embrace(p_11, p_63))
f_0   discard(p_30)
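Tying Table 4 back to the Step 3 operations, the three derivations would read roughly as below. This is again only an illustrative sketch: compact stand-ins for the earlier operation sketches, with the labels of p_37, p_43 and p_44 abridged and the LINDDUN ticks omitted.

def embrace(*threats, keep=0):
    # Compact stand-in: keep the chosen label; the union of ticks is omitted.
    return dict(threats[keep])

def rename(threat, label):
    return {**threat, "label": label}

def discard(threat):
    return None

p73 = {"id": "p73", "label": "Infotainment alteration"}
p3  = {"id": "p3",  "label": "Manipulation of hardware and software"}
p37 = {"id": "p37", "label": "Arbitrary code execution on the TCU"}        # abridged
p43 = {"id": "p43", "label": "Physical tampering of the device firmware"}  # abridged
p44 = {"id": "p44", "label": "Remote installation of attacker software"}   # abridged
p11 = {"id": "p11", "label": "Unauthorised activities"}
p63 = {"id": "p63", "label": "Unauthorised diagnostic access"}
p30 = {"id": "p30", "label": "Car depleted battery"}

f21 = embrace(p73, p3, p37, p43, p44)        # keeps "Infotainment alteration"
f35 = rename(embrace(p11, p63),
             "Unauthorised access in OEM and/or car services")
f0  = discard(p30)                           # dropped as irrelevant to privacy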
We end up with 41 final threats, as Table 5 illus-
trates along with some statistics on the number of op-
erations applied.
Table 5: Operations applied in the automotive domain.
Step 2 (Total)   Step 3 (Total)   Embrace   Rename   Discard
75               41               26        4        3
Mapping. For the sake of simplicity, we present an
extract of the mapping with respect to the threat tree
for a LINDDUN property, precisely the Detectability
property. The leading final threats here are f_7 “Communication protocol hijacking in car devices”, f_32 “Software vulnerabilities exploitation in OEM and/or car services” and f_35 “Unauthorised access in OEM and/or car services”. In particular, we performed the following operations:
embrace(f_32, f_35, D_ds1) = D_ds1,
embrace(f_32, D_ds2) = D_ds2,
embrace(f_7, D_ds3) = D_ds3.
We could not match the following threats with any
LINDDUN node: f_13 “Failure to meet contractual requirements with driver” and f_41 “Violation of rules and regulations/Breach of legislation/Abuse of driver personal data”.
Safety Check. Finally, we iterated over all the
nodes of the trees, independently of the LINDDUN
property, but did not find any additional LINDDUN
threats to which f_13 and f_41 could be reasonably mapped.
4.2 The Source from the Web Domain
This section discusses specificities arising from the
OWASP list of threats for web privacy, which we find
relevant for the wider automotive domain.
Collection. We considered a general list of privacy
threats targeting web applications, namely OWASP
Top 10 Privacy Risks (OWASP, 2021), as the only source during collection, due to its relevance in the
chosen domain. In particular, we employed the “Cal-
culation of the complete Privacy Risks list v2.0”,
which includes a total of 20 threats forming the pre-
liminary threats according to our systematic method.
Manipulation. Subsequently, we realised that some operations were needed to overcome redundancy, hence we went through various applications of embrace and rename but never used discard. The preliminary threats reduced to a total of 15 final threats, and some relevant statistics are in Table 6. As a result, we could not match the following threats with any LINDDUN node: f_2 “Consent-related issues with driver”, f_4 “Inability of driver to access and modify data”, f_7 “Insufficient data breach response from OEM”, f_11 “Misleading content in OEM services”, f_13 “Secondary use of driver data” and f_14 “Sharing, transfer or processing through 3rd party of driver data”.
Table 6: Operations applied in the web domain.
Step 2 (Total)   Step 3 (Total)   Embrace   Rename   Discard
20               15               3         4        0
4.3 Findings and Conclusions
The full final threats are available online (Raciti and
Bella, 2023). The application of our systematic
method highlighted that there are final threats that are
not embraceable with any LINDDUN node according
to the analyst’s judgement, and these are summarised
in Table 7. Note that the table relies on a suffix to
the indexes of the threats to avoid ambiguity, namely
Table 7: Final threats from the automotive and web domains that we could not match to any LINDDUN threat.
F       LB                                                                  S
f_13a   Failure to meet contractual requirements with driver               p_27a
f_41a   Violation of rules and regulations/Breach of legislation/          p_28a
        Abuse of driver personal data
f_2w    Consent-related issues with driver                                 rename(embrace(p_4w, p_17w))
f_4w    Inability of driver to access and modify data                      p_9w
f_7w    Insufficient data breach response from OEM                         p_3w
f_11w   Misleading content in OEM services                                 p_16w
f_13w   Secondary use of driver data                                       p_19w
f_14w   Sharing, transfer or processing through 3rd party of driver data   rename(embrace(p_12w, p_15w))
threats from Section 4.1 are referred to as f_ia, whilst those from Section 4.2 are indicated as f_iw, to distinguish the a(utomotive) domain from the w(eb application) one. By taking off the domain-specific phrases, we
get a list of threats that are general enough to become
valid candidates as new nodes in the pertaining threat
tree(s) of an amended LINDDUN methodology. This
answers SRQ1, which required the execution of our
systematic method up to its final step.
This paper faced the challenge of threat modelling
in the automotive domain in two ways. It questioned
whether LINDDUN could suffice as an abstract-level
methodology, concluding that it may have to be ex-
tended with 8 new threats, thereby effectively answer-
ing SRQ1. It questioned how to build a list of de-
tailed threats in the same domain ensuring that the
list is complete with respect to chosen relevant best
practices, concluding with a list of 56 detailed, final
threats, thereby effectively answering SRQ2.
The paper has remarked consistently that its find-
ings are biased by the authors’ subjectivity. However,
all identified threats remain valid candidates for the
international community’s evaluation. While it seems
a stretch to imagine that the analyst’s role may be removed entirely, our future research looks at modern, intelligent techniques from the area of Natural Language Processing to improve the formalisation of the various operations made through the steps of our systematic method. In particular, the upcoming steps involve the application of Semantic Similarity to score the relationship between threats based on their semantics, hence ultimately reducing subjectivity.
REFERENCES
Bella, G., Biondi, P., and Tudisco, G. (2023). A double as-
sessment of privacy risks aboard top-selling cars. Au-
tomotive Innovation.
Chah, B., Lombard, A., Bkakria, A., Yaich, R., Abbas-
Turki, A., and Galland, S. (2022). Privacy threat anal-
ysis for connected and autonomous vehicles. Pro-
cedia Computer Science, 210:36–44. The 13th In-
ternational Conference on Emerging Ubiquitous Sys-
tems and Pervasive Networks (EUSPN) / The 12th In-
ternational Conference on Current and Future Trends
of Information and Communication Technologies in
Healthcare (ICTH-2022) / Affiliated Workshops.
Deng, M., Wuyts, K., Scandariato, R., Preneel, B., and
Joosen, W. (2011). A privacy threat analysis frame-
work: supporting the elicitation and fulfillment of
privacy requirements. Requirements Engineering,
16(1):3–32.
ENISA (2019). Good Practices for Security of Smart Cars.
https://www.enisa.europa.eu/publications/smart-cars.
GDPR (2016). Regulation (EU) 2016/679 General Data
Protection Regulation. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32016R0679.
Microsoft (2009). The STRIDE threat model.
OWASP (2021). Top 10 Privacy Risks.
https://owasp.org/www-project-top-10-privacy-risks/.
Pompigna, A. and Mauro, R. (2022). Smart roads: A state
of the art of highways innovations in the smart age.
Engineering Science and Technology, an International
Journal, 25:100986.
Raciti, M. and Bella, G. (2023). GitHub repository with complete outcomes. https://github.com/tsumarios/LINDDUN-threats-completeness.
Toh, C. K., Sanguesa, J. A., Cano, J. C., and Martinez, F. J. (2020). Advances in smart roads for future smart cities. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 476(2233).
Van Landuyt, D. and Joosen, W. (2020). A descriptive
study of assumptions made in linddun privacy threat
elicitation. In Proceedings of the 35th Annual ACM
Symposium on Applied Computing, SAC ’20, page
1280–1287, New York, NY, USA. Association for
Computing Machinery.
Vasenev, A., Stahl, F., Hamazaryan, H., Ma, Z., Shan, L., Kemmerich, J., and Loiseaux, C. (2019). Practical security and privacy threat analysis in the automotive domain: Long term support scenario for over-the-air updates. In Proceedings of the 5th International Conference on Vehicle Technology and Intelligent Transport Systems - VEHITS, pages 550–555. INSTICC, SciTePress.