pare these attacks with ours (Phan, 2004). The figures are given in Table 1 (with slightly different units: our time complexities are measured in terms of triple encryptions instead of single encryptions; our memory complexities are measured in bits instead of 32-bit words; and our number of keys includes the target one and not only the related ones). Previous attacks are discussed in this paper.
Our Contribution. In this paper, we present a new attack on triple encryption which is based on the discovery of fixed points for the mapping
$$x \mapsto \mathrm{Enc}_K \circ \mathrm{Enc}_{\varphi(K)}^{-1}(x)$$
for some relation $\varphi$. This discovery requires the entire code book in a Broadcast Known Plaintext (BKP) attack for $\mathrm{Enc}_K$ and $\mathrm{Enc}_{\varphi(K)}$, which makes our data complexity high. In the BKP model, the adversary obtains a random plaintext and its encryption under different keys. Once we have a (good) fixed point, our attack becomes similar to a standard meet-in-the-middle attack, so it has a pretty low complexity. Finally, we show that our attack compares well to the best ones so far. In the 2-key case, it becomes the best known-plaintext attack.
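To make the fixed-point notion concrete, here is a minimal Python sketch of the search given both code books. The toy 16-bit permutation and the relation `toy_phi` are hypothetical stand-ins chosen so the example runs quickly; they are not the cipher or the relation analyzed in this paper. The sketch relies on the observation that $x$ is a fixed point of $\mathrm{Enc}_K \circ \mathrm{Enc}_{\varphi(K)}^{-1}$ if and only if some plaintext $y$ satisfies $\mathrm{Enc}_K(y) = \mathrm{Enc}_{\varphi(K)}(y) = x$.

```python
# Toy sketch: fixed points of x -> Enc_K(Enc_{phi(K)}^{-1}(x)) are exactly
# the ciphertexts on which the two code books agree on some plaintext y.
import random

BLOCK = 16  # toy 16-bit block so the full code book fits in memory

def toy_codebook(key: int) -> list[int]:
    """Placeholder 'cipher': a fixed pseudorandom permutation per key."""
    rng = random.Random(key)
    perm = list(range(2 ** BLOCK))
    rng.shuffle(perm)
    return perm  # perm[y] = Enc_key(y)

def toy_phi(key: int) -> int:
    """Placeholder key relation; the actual relation is attack-specific."""
    return key ^ 1

def fixed_points(key: int) -> list[int]:
    book_k = toy_codebook(key)              # BKP data: code book under K
    book_phik = toy_codebook(toy_phi(key))  # ... and under phi(K)
    # For two independent random permutations, about one agreement is
    # expected on average (2^BLOCK positions, each agreeing w.p. 2^-BLOCK).
    return [book_k[y] for y in range(2 ** BLOCK) if book_k[y] == book_phik[y]]

print(fixed_points(0x1234))
```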
2 COMPARING RELATED-KEY ATTACKS
Given a dedicated attack against a cipher, it is tempting to compare it with exhaustive search and declare the cipher broken if the attack is more efficient. This is however a bit unfair, because the attack model may already admit generic attacks better than exhaustive search.
As an example, Biham's generic attack (Biham, 1996) applies standard time-memory tradeoffs in the related-key model. His attack consists of collecting $y_i = \mathrm{Enc}_{K_i}(x)$ for a fixed $x$ and $r$ related keys; that is, it uses $r$ chosen plaintexts. Then, it builds a dictionary of pairs $(y_i, i)$ and runs a multi-target key recovery to find one $K$ such that $\mathrm{Enc}_K(x)$ is in the dictionary. With $t$ attempts, the probability of success is $p = 1 - (1 - r2^{-\ell})^t \approx 1 - e^{-rt2^{-\ell}}$. The dictionary has size $m = r(\ell + \log r)$ bits. For simplicity, we approximate $m \approx r$. In particular, for $t = r = 2^{\ell/2}$, we have $p \approx 1 - e^{-1} \approx 63\%$, so this is much cheaper than exhaustive search.
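A compact Python sketch of this generic attack may help; `enc` stands for any block cipher oracle, `ell` for the key length in bits, and every name is our own placeholder rather than Biham's notation:

```python
# Sketch of the generic related-key time-memory tradeoff: collect
# y_i = Enc_{K_i}(x) for one fixed x under r related keys, then try
# t candidate keys against the dictionary of all r targets at once.
import random

def biham_attack(x, related_keys, t, enc, ell):
    # Dictionary (y_i, i): about m ~ r entries.
    table = {enc(k, x): i for i, k in enumerate(related_keys)}
    # Multi-target key recovery: t trial encryptions of the same x.
    for _ in range(t):
        guess = random.getrandbits(ell)
        y = enc(guess, x)
        if y in table:
            return guess, table[y]  # candidate K with Enc_K(x) = y_i
    return None  # success probability p ~ 1 - exp(-r * t * 2**-ell)
```

A real attack would additionally filter false alarms (a wrong key whose encryption of $x$ happens to collide with some $y_i$) by re-testing candidates on a second plaintext.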
The complexity of a related-key attack can be characterized by a multi-dimensional vector consisting of
• the number of related keys r (the number of keys which are involved is r, i.e. r = 1 when the attack uses no related keys);
• the data complexity d (e.g. the number of chosen plaintexts), where we may distinguish known plaintexts (KP), broadcast known plaintexts (BKP), chosen plaintexts (CP), and chosen ciphertexts (CC), as they may be subject to different costs in the attack model;
• the time complexity t of the adversary, where we may distinguish the precomputation complexity and the online running time complexity;
• the memory complexity m, which may further distinguish quick-access or slow-access memory, and read/write memory or read-only memory;
• the probability of success p.
There are many other possible refinements.
We can compare attacks by using the partial ordering $\leq_p$ on vectors $(r, d, t, m, \frac{1}{p})$, i.e.
$$(r,d,t,m,p) \leq_p (r',d',t',m',p') \iff r \leq r' \text{ and } d \leq d' \text{ and } t \leq t' \text{ and } m \leq m' \text{ and } p \geq p'$$
When a category such as the data complexity d has a sub-characterization $(d_{\mathrm{KP}}, d_{\mathrm{BKP}}, d_{\mathrm{CP}}, d_{\mathrm{CC}})$, the relation $d \leq d'$ induces another partial ordering on these sub-characteristics. We can say that an attack is insignificant if there is a generic attack with a lower complexity vector. Since it is not always possible to compare two multi-dimensional vectors, it is not clear whether an attack is significant when it is not insignificant. It is therefore quite common to extend the partial ordering $\leq_p$ using different models, which are discussed below.
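This partial ordering is straightforward to express in code. One possible rendering (the class and function names are ours), which also exposes the incomparable case:

```python
from dataclasses import dataclass

@dataclass
class Attack:
    r: float  # number of related keys
    d: float  # data complexity
    t: float  # time complexity
    m: float  # memory complexity
    p: float  # probability of success

def leq_p(a: Attack, b: Attack) -> bool:
    """a <=_p b: a is at most as costly as b in every dimension."""
    return a.r <= b.r and a.d <= b.d and a.t <= b.t and a.m <= b.m and a.p >= b.p

def insignificant(attack: Attack, generic_attacks: list[Attack]) -> bool:
    """Insignificant: some generic attack has a lower complexity vector."""
    return any(leq_p(g, attack) for g in generic_attacks)
```

Note that `leq_p(a, b)` and `leq_p(b, a)` can both be False: the two vectors are then incomparable, which is precisely why "not insignificant" does not immediately mean "significant".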
Conservative Model. Traditionally, t, m, and p are combined into a "complexity" which is arbitrarily measured by $\max\left(\frac{t}{p}, m\right)$. We could equivalently adopt $\frac{t}{p} + m$ since these operations yield the same orders of magnitude.
The idea behind this arbitrary notion is that we can normalize the success probability p by using $\frac{1}{p}$ sessions of the attack. So, t has a factor $\frac{1}{p}$ corresponding to the $\frac{1}{p}$ different sessions. Clearly, the running time of every session adds up, whereas their memory complexity does not. If we make no special treatment for r and d, we can just extend this simple notion by adding them to the time complexity t (since the adversary must at least read the received data). We can thus replace t by $\max(r,d,t)$. This leads us to
$$C_{\mathrm{conservative}}(r,d,t,m,p) = \max\left(\frac{r}{p}, \frac{d}{p}, \frac{t}{p}, m\right)$$
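Reusing the `Attack` dataclass from the previous sketch, the conservative measure can be computed as follows; the instantiation with $\ell = 56$ (DES-sized keys) is purely illustrative:

```python
def c_conservative(a: Attack) -> float:
    # 1/p sessions: the time-like quantities r, d, t scale by 1/p,
    # while memory is reused across sessions and does not.
    return max(a.r / a.p, a.d / a.p, a.t / a.p, a.m)

# Illustration with Biham's generic attack: t = r = d = 2**(ell/2),
# m ~ r, and p ~ 63%.
ell = 56
biham = Attack(r=2**(ell/2), d=2**(ell/2), t=2**(ell/2), m=2**(ell/2), p=0.63)
print(c_conservative(biham))  # ~ 2**28.7, versus ~ 2**ell for exhaustive search
```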