collaboration and competition, and from the viewpoint of modeling group formation under the constraints of given goals. However, underhand attacks by hidden coalitions pose security problems that cannot be dealt with by such traditional means. Nor can they be solved by a simple, monotonic approach
based on Coalition Logic(s) such as (Ågotnes et al., 2008; Oravec and Fogel, 2006; Pauly, 2001; van der Hoek and Wooldridge, 2005).
To illustrate all this further, consider the following
concrete example from an online social network such
as Facebook, where abuse, misuse or compromise of
an account can be reported to the system administra-
tion. In particular, a group of agents (in this case,
Facebook users) can report a fake profile:
You can report a profile that violates Face-
book’s Statement of Rights and Responsibilities
by clicking the “Report/Block this Person” link
in the bottom left column of the profile, select-
ing “Fake profile” as the reason, and adding the
appropriate information. [...] (Excerpt from
http://www.facebook.com/help/?search=fake)
The administrator of the system gives an ultimatum to the agent who uses the reported profile and may then eventually close it. An underhand coalition can exploit this report mechanism to attack an agent who possesses a “lawful” original profile: first, the coalition members create a fake profile with the personal information and photos of the agent under attack, and then they become friends with the fake profile. After that, they report the original profile so that the administrator closes it. Reporting is a lawful action, and since the coalition has created the new profile and a large enough number of agents report the same original profile, no suspicion about the hidden coalition is raised, so that the attack succeeds.
Contributions. A formalism to define and reason about such hidden coalitions is thus needed. Indeed, Coalition Logic allows one to define coalitions that are explicit (i.e., not hidden) and is characterized by monotonic permissions to act in groups and individually. What is missing, however, is the notion of a hidden coalition and a method to block the underhand attacks that such coalitions carry out. The idea underlying our approach is to circumscribe the problem in algebraic terms, by defining a system that can be represented by a coalition logic, and then to activate a non-monotonic control on the system itself to block the underhand attacks that hidden coalitions attempt to carry out.
More specifically, we consider multi-agent systems whose security properties depend on the values of sets of propositional-logic formulas, which we call the critical (or security) formulas of the system: for concreteness, we say that a system is secure if all the critical formulas are false, and is thus insecure if at least one critical formula is true. (Of
course, we could also invert the definition and con-
sider a system secure when all critical formulas are
true.) The system agents control the critical formulas in that they control the propositional variables from which the formulas are built: we assume that every variable of the system is controlled by an agent, where the variables controlled by an agent are controlled just by that agent, without interference from any other agent. The actions performed by each agent thus consist in changing the truth values of some of the variables assigned to that agent, which means that the
values of the critical formulas can change due to ac-
tions performed by the agents, including in particular
malicious insider agents who form hidden coalitions
to attack the system by making critical formulas be-
come true. Returning to the Facebook example, this is exactly what happens when agents report the original profile as fake by setting the flag (clicking on the link).¹
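To make this setting concrete, the following Python sketch (ours, not from the paper; the names, the dictionary-based state representation, and the report threshold of 3 are all illustrative assumptions) models a system in which each propositional variable is controlled by exactly one agent and security means that every critical formula is false:

```python
from typing import Callable, Dict, List

State = Dict[str, bool]              # truth value of each propositional variable
Formula = Callable[[State], bool]    # a critical formula evaluated on a state

class System:
    """Each variable is controlled by exactly one agent; the system is
    secure iff every critical formula evaluates to False."""

    def __init__(self, owner: Dict[str, str], critical: List[Formula]):
        self.owner = owner                       # variable -> controlling agent
        self.state = {v: False for v in owner}   # all variables start out false
        self.critical = critical

    def is_secure(self) -> bool:
        return not any(f(self.state) for f in self.critical)

    def act(self, agent: str, var: str, value: bool) -> None:
        # An agent may only change the truth values of its own variables.
        assert self.owner[var] == agent, f"{agent} does not control {var}"
        self.state[var] = value

# Facebook-like scenario: the critical formula becomes true once enough
# agents have set their "report" flag (the threshold of 3 is made up).
owner = {f"report_{i}": f"agent_{i}" for i in range(5)}
critical = [lambda s: sum(s[v] for v in s if v.startswith("report_")) >= 3]
net = System(owner, critical)

for i in range(3):              # three hidden-coalition members report
    net.act(f"agent_{i}", f"report_{i}", True)
print(net.is_secure())          # False: the critical formula is now true
```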
At each instant of time, agents ask the system to carry out the actions they wish to perform, i.e., to change the truth values of the variables they control, and the system has to decide whether to allow such actions, but without knowing about the existence of possible hidden coalitions and thus at the risk of the system becoming insecure. To block such attacks, we
formalize here a deterministic blocking method, im-
plemented by a greedy algorithm, which blocks the
actions of potentially dangerous agents. We prove that this method is sound and complete, in that it does not allow a system to go into an insecure state when it starts from a secure state, and it ensures that every secure state can be reached from any secure state.
However, this algorithm is not optimal as it does not
block the smallest set of potentially dangerous agents.
We thus also introduce a non-deterministic blocking method, which we obtain by extending the deterministic method with an oracle that determines the minimum set of agents to block so as to ensure the security of the system. We show that the soundness and completeness result extends to this non-deterministic method.
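The deterministic method is not spelled out at this point in the paper, but a minimal greedy sketch in the spirit of this description (reusing the System class from the previous snippet; the request format and all names are our assumptions) could look as follows: requested actions are tentatively applied one at a time, and an agent is blocked as soon as one of its actions would make a critical formula true.

```python
from typing import List, Set, Tuple

def greedy_block(system: System,
                 requests: List[Tuple[str, str, bool]]) -> Set[str]:
    """Greedily decide which requested actions to allow: apply each action
    tentatively and block the requesting agent if the system would become
    insecure. Returns the set of blocked agents."""
    blocked: Set[str] = set()
    for agent, var, value in requests:
        if agent in blocked:
            continue                      # actions of blocked agents are rejected
        old = system.state[var]
        system.act(agent, var, value)     # tentative update
        if not system.is_secure():
            system.state[var] = old       # roll back the dangerous action
            blocked.add(agent)
    return blocked
```

Such a first-come-first-blocked choice need not block the smallest possible set of agents, which is precisely the non-optimality that the oracle-based, non-deterministic variant addresses.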
We also calculate the computational cost of our
two blocking methods. This computational analysis
is completed by determining upper-bound results for the problem of finding a set of agents to be blocked so as to prevent system transitions into insecure states,
¹ In this paper, we do not consider how the administrator decides to close the profile, nor do we consider in detail the non-monotonic aspects of how agents enter/exit/are banned from a system or enter/exit a hidden coalition, or how members of a hidden coalition synchronize/organize their actions. All this will be the subject of future work.