A VV is a one-hop directed adjacency between a function f_1 and a port p of a function f_2, and can be interpreted as a potential unitary step within an attack. The attacker at f_1 must see p via at least one possible means, such as a network route or a system call.
This holds for SIs thanks to the filtering of SI candidates. As for FIs, any FI whose consumer port p is not visible makes the whole model ill-defined, because functional interactions, being part of the system by design, must in principle also be physically realisable. Each VV is additionally decorated with the attacker's attack position, discussed further below.
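The visibility requirement above can be sketched as a simple well-formedness check. The relation, names, and example data below are our own assumptions for illustration, not the model's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical visibility relation: the means by which a function
# sees a port (network route, system call, ...); assumed example data.
VISIBILITY = {
    ("f1", "p_of_f2"): {"network route"},
}

@dataclass(frozen=True)
class VV:
    """One-hop directed adjacency from an attacker function to a port."""
    attacker_at: str   # function f_1 hosting the attacker
    target_port: str   # port p of function f_2

def is_well_formed(vv: VV) -> bool:
    # The attacker at f_1 must see p via at least one means;
    # otherwise the VV (and, for an FI, the model) is ill-defined.
    return bool(VISIBILITY.get((vv.attacker_at, vv.target_port)))
```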
At most two VVs are derived from an interaction:
• a forward attack, from the interaction’s producer
to its consumer;
• a reverse attack, from the consumer to the producer; it does not exist for IIs, as in their case, by definition, the attacker is already at the source; reverse attacks are often unviable, e.g. if the source port physically does not receive any information.
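The derivation rule above can be illustrated by a short sketch; the interaction encoding and field names are our assumptions, not the paper's notation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interaction:
    producer: str                 # producer function
    consumer: str                 # consumer function
    kind: str                     # "FI", "SI" or "II"
    source_receives: bool = True  # does the source port receive anything?

def derive_vvs(ia: Interaction) -> list[tuple[str, str, str]]:
    """Return up to two (attacker, target, direction) triples."""
    vvs = [(ia.producer, ia.consumer, "forward")]
    # No reverse attack for IIs (the attacker is already at the source),
    # and none when the source port physically receives no information.
    if ia.kind != "II" and ia.source_receives:
        vvs.append((ia.consumer, ia.producer, "reverse"))
    return vvs
```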
Actual viability of an attack via a VV depends on attack costs, discussed further below, which are specified as infinite or non-existent if an attack is not viable (the latter differs from the former only in disallowing the presence of associated interactions, which in turn helps to find errors in the model specification).
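One way to encode the infinite/non-existent distinction is sketched below; the choice of math.inf versus None is our own encoding, not prescribed by the model:

```python
import math
from typing import Optional

def cost_spec_ok(cost: Optional[float], has_interactions: bool) -> bool:
    """Check a VV's cost specification (hypothetical encoding).

    math.inf : attack unviable, associated interactions still allowed;
    None     : attack non-existent, associated interactions forbidden,
               which helps to surface model-specification errors.
    """
    if cost is None:
        return not has_interactions
    return True
```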
During the construction of a V, IIs are searched for by exhaustively checking each FI for its possible interceptions, the condition being the visibility of the FI's producer or consumer port by an intercepting attacker. We do not repeat the process for SIs, as the attacker never attacks an SI from the side: if such an interaction were possible, it should already have been introduced directly into the respective template as another SI.
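The exhaustive II search can be sketched as follows; the visibility predicate and data shapes are assumptions for illustration:

```python
def find_iis(fis, attackers, sees):
    """Derive intercepting interactions (IIs) from FIs.

    fis       -- iterable of (producer_port, consumer_port) pairs
    attackers -- candidate intercepting attacker functions
    sees      -- predicate sees(attacker, port) -> bool
    """
    iis = []
    for producer_port, consumer_port in fis:
        for a in attackers:
            # Condition: the intercepting attacker sees the FI's
            # producer or consumer port.
            if sees(a, producer_port) or sees(a, consumer_port):
                iis.append((a, (producer_port, consumer_port)))
    return iis
```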
In order to differentiate attack costs more finely,
each VV has a class determining the attacker’s attack
position (AP). Let us define a physical communica-
tion path (PCP) as an (unordered) set of all termi-
nals which may potentially take part in the transport
of data of an FI, including the host terminals of the two peer functions concerned. Depending on the physical position of the attacker, we divide VVs into the following classes:
• AP S (source): attacker is at one of the two end
points of the attacked interaction; thanks to this,
it may e.g. steal the security keys more easily
and use them on the other peer; all VVs derived
from FIs and SIs are of the class S, because the
other classes model only intercepting attacks;
• AP P (path): concerns only IIs; a man-in-the-middle scheme where the attacker function is in a terminal belonging to the PCP of the FI to be attacked, and the function is also a kernel (thus, a network-managing function), so it may listen to and easily intercept the routed interaction;
• AP A (access): concerns only IIs; not of the class P, but (by definition of a VV) the target (attacked) port is seen; the attacker has fewer possibilities, though it may still perform e.g. a DoS attack.
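The three classes can be summarised in a small classifier. The signature below is a sketch under our own assumptions about how the model exposes the PCP and the kernel property:

```python
from enum import Enum

class AP(Enum):
    S = "source"   # attacker at an endpoint of the interaction
    P = "path"     # kernel attacker on the PCP (man in the middle)
    A = "access"   # target port visible, but no better position

def classify_ap(kind: str, attacker_terminal: str = "",
                pcp: frozenset = frozenset(),
                attacker_is_kernel: bool = False) -> AP:
    if kind in ("FI", "SI"):
        return AP.S          # only intercepting attacks get P or A
    if attacker_terminal in pcp and attacker_is_kernel:
        return AP.P
    return AP.A              # by definition of a VV the port is seen
```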
In the case of an II, the attack difficulty always includes
both the intercepting attack on a respective FI and a
subsequent attack on a peer function of the FI.
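This composition of difficulties can be written as a simple sum, with math.inf propagating unviability; this is a sketch, as the actual form of the cost model is not specified here:

```python
import math

def ii_difficulty(intercept_cost: float, peer_attack_cost: float) -> float:
    """Total difficulty of an attack via an II: the intercepting attack
    on the respective FI plus the subsequent attack on a peer function
    of that FI. math.inf marks an unviable step and propagates."""
    return intercept_cost + peer_attack_cost
```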
A compromised kernel, due to its privileges, may
have several impersonation capabilities (here under-
stood as posing as another function) which may trick
other functions, in particular kernels which manage
the routing rules. We put a line here concerning the
impersonation limits which, while not perfect, seems
simple and reasonable to us: the impersonation in
question may work around routing rules in remote
terminals (like a network router), as a kernel may
set/alter a network packet’s origin data (MAC etc.) as
it wishes and routing hardware typically controls just
that. Conversely, the impersonation cannot give an
attacker a better AP, even if the compromised kernel
makes it seem that it is a peer of an FI or a router on
a PCP, because:
• an advantage of AP S is associated with confiden-
tial information in a VV’s peer of the function
to attack, and here we assume that it cannot be
stolen if the attacker has merely an AP of class P
or A (which does not always hold – consider e.g.
(Mavrogiannopoulos et al., 2012) where a func-
tion stores secure data in the kernel);
• similarly, an advantage of AP P stems from the ac-
tual physical position of the attacker, which can-
not thus be replaced by impersonation.
The line drawn above would not hold if a peer within an FI authenticated its other peer only via the
packet’s origin data. In that case, an impersonation
would help in gaining advantages of the AP S. We
assume, however, that it is an unlikely practice in a
critical system.
3.4 Compromised States
We augment the modelling with variable states of the
attacker. This will allow further fine-tuning of costs.
Namely, the attacker, during its propagation through
the system, puts functions into different compromised
states (CS):
• malware (M) where the attacker is live code;
• bad data (B) where the attacker hides in a passive
blob of data;
ICISSP 2021 - 7th International Conference on Information Systems Security and Privacy