controlled by a different party. Such an approach is taken by some real-world secure MPC solutions, such as Sharemind (Bogdanov et al., 2008), which is used in this work.
In MPC, assumptions are typically placed on the participating parties (agents in our case) and on their communication and computation capabilities. The assumptions we consider in this work are the following: (i) there is no trusted third party; (ii) the planning agents are semi-honest (or honest-but-curious); (iii) the computation power of the agents is either unbounded (information-theoretic security) or polynomial-time bounded (computational security).
The assumption of semi-honest agents means that, as opposed to malicious agents, every agent follows the rules of the computation protocol on its actual input data, but after the computation is finished, it may use any information it has received during the protocol to try to compromise privacy. The computation power of the agents (which can be used to infer additional knowledge from the executed protocol) is typically assumed to be either unbounded, in which case we speak of information-theoretic security, or polynomial-time bounded, which is the case of computational security.
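To make these assumptions concrete, the following minimal Python sketch (our own illustration, not Sharemind's actual API; the helper names and ring size are assumptions) shows additive secret sharing, the primitive underlying Sharemind's three-server model. Each semi-honest party holds one share; any proper subset of shares is statistically independent of the secret, which is what information-theoretic security refers to here.

```python
import secrets

MOD = 2**32  # illustrative ring size; Sharemind's original design shares 32-bit integers

def share(value: int, n_parties: int = 3) -> list[int]:
    """Split a secret into n additive shares; any proper subset of the
    shares is uniformly random, i.e., independent of the secret."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    return shares + [(value - sum(shares)) % MOD]

def reconstruct(shares: list[int]) -> int:
    """Only the sum of all shares reveals the secret."""
    return sum(shares) % MOD

# A semi-honest party follows the protocol but inspects everything it
# receives; a single share tells it nothing about the secret, regardless
# of its computation power (information-theoretic security).
shares = share(42)
assert reconstruct(shares) == 42
```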
When applied to PP-MAP, the notion of a polynomial-time bounded adversary may seem somewhat less suitable, as planning itself is not polynomial but PSPACE-complete (Bylander, 1994). Nevertheless, the computation power of the agents is still polynomial, which restricts them to solving either polynomial or small instances. For those instances of planning problems which can be practically solved, the cryptographic assumptions for which the polynomial-time bound is typically used (such as the hardness of factoring large integers) remain valid.
There are essentially two approaches to multi-agent planning based on MPC techniques. The first approach is to encode planning in some general MPC technique such as cryptographic circuits (Yao, 1986). Cryptographic circuits encode the whole computation of a function into a boolean or algebraic circuit, which can then be securely evaluated using one of the existing secure protocols. The problem with respect to MAP is that the worst-case scenario has to be encoded, that is, the complete exploration of the search space, which is itself exponential in the size of the MAP input (e.g., MA-STRIPS). Therefore, it is not clear how exactly PP-MAP would be encoded in such a general model, whether it is even feasible, and what the overhead of such an encoding would be.
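As a toy illustration of the circuit-encoding step (our own example, showing only the encoding, not the secure evaluation), the following Python sketch evaluates an equality test built solely from XOR/AND/NOT gates, the gate types that Yao-style protocols operate on. A real garbled-circuit protocol would evaluate encrypted versions of these gates on wire labels; for MAP, the circuit would have to unroll the entire worst-case search, hence the exponential blow-up discussed above.

```python
from itertools import product

def equality_circuit(a_bits, b_bits):
    """Evaluate a boolean circuit computing [a == b], built only from the
    XOR/AND/NOT gate types that garbled-circuit protocols operate on."""
    wire = 1
    for a, b in zip(a_bits, b_bits):
        xnor = 1 - (a ^ b)   # NOT(XOR(a, b)) gate
        wire = wire & xnor   # AND gate: all bit pairs must agree
    return wire

# Exhaustive check against plain comparison for all 2-bit inputs.
for bits in product([0, 1], repeat=4):
    a, b = bits[:2], bits[2:]
    assert equality_circuit(a, b) == (a == b)
```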
The second approach is to devise a specific PP-MAP algorithm based on MPC primitives, such as private set intersection (Li and Wu, 2007). There are a number of solutions for a related problem, the shortest path in a graph (Brickell and Shmatikov, 2005). Such techniques solve the shortest-path problem for an explicit graph, typically represented by a matrix. In classical planning, and subsequently in MAP, the explicit graph (that is, the transition system) is exponential in the problem size, which, for practical problem sizes, makes such an explicit (e.g., matrix) representation impossible to use. So far, the only published (theoretical) MAP planner using MPC primitives is that of (Tožička et al., 2017), which uses secure set intersection as part of the planning process. In this work, we realize the theoretical concept by implementing the set intersection using the Sharemind framework.
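As a rough illustration of this primitive, here is a minimal Python sketch under our own simplifying assumptions (it is not Sharemind's SecreC interface, and the names are ours): set elements are additively shared, and each candidate pair is compared by a zero-test on the shared difference. The zero-test below reconstructs the difference locally and is only a stand-in for a real MPC equality protocol, such as those Sharemind provides; note also that the pairwise approach is quadratic, whereas dedicated PSI protocols are more efficient.

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus for additive sharing

def share(value: int, n: int = 3) -> list[int]:
    """Split a value into n additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    return shares + [(value - sum(shares)) % P]

def secure_equals(x_shares: list[int], y_shares: list[int]) -> bool:
    """Stand-in for a secure zero-test: each server subtracts its two shares
    locally; a real deployment would run an MPC equality protocol on the
    shared difference instead of reconstructing it as done here."""
    diff = [(xs - ys) % P for xs, ys in zip(x_shares, y_shares)]
    return sum(diff) % P == 0

def private_set_intersection(set_a: set[int], set_b: set[int]) -> set[int]:
    """Pairwise equality tests on shared elements; only the intersection
    (the intended output of the protocol) is revealed."""
    shared_a = {x: share(x) for x in set_a}
    shared_b = {y: share(y) for y in set_b}
    return {x for x in set_a for y in set_b
            if secure_equals(shared_a[x], shared_b[y])}

print(private_set_intersection({1, 5, 9}, {5, 9, 12}))  # -> {5, 9}
```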
3.2 Weak and Strong Privacy
In the MAP and PP-MAP literature, the concept of
privacy has been mostly reduced to the idea of weak
privacy, as stated in (Nissim and Brafman, 2014). Here,
we rephrase the (informal) definition:
We say that an algorithm is weak privacy-preserving if, during the whole run of the algorithm, the agent does not communicate (unencrypted) private parts of the states, private actions, and private parts of the public actions. In other words, the agent openly communicates only the information in $M^{\rhd} = \{\Pi_i^{\rhd}\}_{i=1}^{n}$.
Obviously, weak privacy does not give any privacy guarantees whatsoever, as the adversary may deduce private knowledge from the communicated public information. Nevertheless, not all weak privacy-preserving algorithms are equal in the amount of privacy leaked, as shown in (Štolba et al., 2019).
In (Nissim and Brafman, 2014), the authors also define strong privacy, which is in accordance with the cryptographic and secure MPC model. Here, we informally rephrase the definition of Nissim and Brafman:
A strong privacy-preserving algorithm is a distributed algorithm such that no agent $\alpha_i$ can deduce an isomorphic (that is, differing only in renaming) model of a private variable, a private operator and its cost, or a private precondition and effect of a public operator belonging to some other agent $\alpha_j$, beyond what can be deduced from the public input ($M^{\rhd} = \{\Pi_i^{\rhd}\}_{i=1}^{n}$) and the public output (the public projection of the solution plan $\pi^{\rhd}$).
A more precise formal definition can be stated based
on the definition of privacy in secure MPC, where
privacy is typically defined with respect to the ideal
world in which a trusted third party exists.
Definition 1 (Strong privacy MPC). Let $p_1, \ldots, p_n$ be $n$ parties computing an algorithm $P$ which takes $n$ pri-