trust-based cooperation system reaches its limitations.
As a result, a change of norms can adapt the overall
system to a dynamically changing environment.
The remainder of this paper is organised as follows: Section 2 explains the application scenario. Afterwards, Section 3 introduces our adaptive control loop to control norms in such a self-organised OC system. Section 4 describes the challenges of observing the system state in this application scenario. Based on this system model, Section 5 evaluates different metrics in simulation. Finally, Section 6 summarises the paper and gives an outlook on future work.
2 TRUSTED DESKTOP GRID
In this work, we analyse how to deal with open distributed systems. To understand such systems, we use multi-agent systems and model the nodes of the system as agents. Our application scenario is an open distributed Desktop Grid System. We want participating agents to cooperate to gain an advantage. Every agent works for a user and periodically receives a job, which contains multiple parallelisable work units. Its goal is to get all work units processed as fast as possible by requesting other agents to work for it. Performance is measured by the speedup:
speedup = \frac{time_{self}}{time_{distributed}} \quad (1)
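For illustration, the speedup of eq. (1) can be computed as follows. This is a minimal Python sketch; the function name and the example values are ours and not part of the TDG implementation.

def speedup(time_self: float, time_distributed: float) -> float:
    """Eq. (1): speedup > 1 means distributing the work units paid off."""
    return time_self / time_distributed

# Example: a job of 10 work units of 60s each, finished in 150s when distributed.
print(speedup(10 * 60.0, 150.0))  # -> 4.0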
In general, agents behave selfishly and only cooperate if they can expect an advantage. They have to decide which agents they want to give their work to and for which agents they want to work.
Since we consider an open system, agents are autonomous and can join or leave at any time. If no cooperation partners can be found, agents have to calculate their own work units and achieve a speedup of one. We do not control the agent implementation, so agents may be uncooperative or even malicious, and no assumption of benevolence can be made. Such a system is vulnerable to different kinds of attacks. A Freerider, for example, can simply refuse to work for other agents and gain an advantage at the expense of cooperative agents.
The global goal is to enable agents that act according to the system rules to achieve a good speedup. We measure the global goal either by the average speedup of the well-behaving agents or by the amount of cooperation (eq. 2) combined with the average submit-to-work ratio of all agents (eq. 3).
cooperation = \sum_{i=1}^{n} \sum_{j=1}^{n} ReturnWork(A_i, A_j) \quad (2)
fairness = \sum_{i=1}^{n} \min(submit_i - work_i) \quad (3)
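A minimal sketch of eq. (2), assuming the interactions are recorded in a matrix where return_work[i][j] counts the work units agent A_j returned to agent A_i; this data layout is an assumption for illustration, not the paper's implementation.

def cooperation(return_work: list[list[int]]) -> int:
    # Eq. (2): sum over all agent pairs of returned work units.
    return sum(sum(row) for row in return_work)

# Example with three agents: agent 0 received 2 units from agent 1, etc.
print(cooperation([[0, 2, 1],
                   [1, 0, 0],
                   [3, 0, 0]]))  # -> 7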
To overcome the problems of such an open system, we introduced a trust metric (Klejnowski et al., 2010). Every agent receives ratings for every action it takes. This allows us to estimate the general behaviour of an agent based on its previous actions (Klejnowski et al., 2010). In our system, agents receive a good rating if they work for other agents and a bad rating if they reject or cancel work requests. As a result, we can isolate malevolent agents and maintain a good system utility in most cases. We call this system a Trusted Desktop Grid (TDG) (Bernard et al., 2010).
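The rating scheme can be sketched as follows. The class name, the rating values of +1/-1, and the averaging are assumptions for illustration; the actual TDG trust metric (Klejnowski et al., 2010) may differ.

class Reputation:
    """Collects ratings per agent and estimates behaviour from past actions."""

    def __init__(self) -> None:
        self.ratings: dict[str, list[int]] = {}  # agent id -> received ratings

    def rate(self, agent_id: str, value: int) -> None:
        self.ratings.setdefault(agent_id, []).append(value)

    def work_completed(self, agent_id: str) -> None:
        self.rate(agent_id, +1)  # good rating: worked for another agent

    def work_rejected(self, agent_id: str) -> None:
        self.rate(agent_id, -1)  # bad rating: rejected or cancelled a request

    def score(self, agent_id: str) -> float:
        # Average of past ratings as an estimate of future behaviour.
        received = self.ratings.get(agent_id, [])
        return sum(received) / len(received) if received else 0.0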
We consider the following agent types in our system:
• Adaptive Agents - These agents are cooperative. They work for other agents that have a good reputation in the system. How high the reputation has to be depends on the estimated current system load and on how full the agent's own queue is (see the sketch after this list).
• Freerider - Such agents do not work for other agents and reject all work requests. However, they ask other agents to work for them. This increases the overall system load and decreases the utility for well-behaving agents.
• Egoists - These agents only pretend to work for other agents. They accept all work requests but return fake results, which wastes the time of other agents. If results are not validated, this may lead to wrong results; otherwise, it lowers the utility of the system.
• Cunning Agents - These agents behave well in the beginning but may change their behaviour later. Periodically, randomly, or under certain conditions, they behave like Freeriders or Egoists. This behaviour is hard to detect and may lower the overall system utility.
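The acceptance decision of the Adaptive Agents can be sketched as follows. The threshold shape and the weights are purely illustrative assumptions; the paper only states that the decision depends on the estimated system load and the agent's queue fill.

def accepts(requester_reputation: float,
            queue_fill: float,            # own queue fill level in [0, 1]
            system_load: float) -> bool:  # estimated system load in [0, 1]
    # Assumed rule: a fuller own queue and a more loaded system make the
    # agent demand a better-reputed requester before accepting work.
    threshold = 0.2 + 0.5 * queue_fill + 0.3 * system_load
    return requester_reputation >= threshold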
We simulate an attack by adding new malicious agents to the system at startup or during runtime. Since these malicious agents distribute their work, the speedup for well-behaving agents decreases. However, those agents receive bad ratings, so their reputation in the system is reduced. At that point, other agents stop cooperating with these isolated agents. We try to minimise the impact and duration of such disturbances, but they still decrease the system utility (Bernard et al., 2011).
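Such an attack could be set up in simulation roughly as follows. All names, tick values, and thresholds are illustrative assumptions: Freeriders join at a fixed tick, accumulate bad ratings because they reject all work requests, and count as isolated once their average rating falls below a threshold.

ATTACK_TICK = 500
ISOLATION_THRESHOLD = -0.5

class Agent:
    def __init__(self, agent_id: int, freerider: bool = False):
        self.id = agent_id
        self.freerider = freerider
        self.ratings: list[int] = []

    def reputation(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

    def isolated(self) -> bool:
        return self.reputation() < ISOLATION_THRESHOLD

agents = [Agent(i) for i in range(10)]
for tick in range(1000):
    if tick == ATTACK_TICK:  # attack during runtime
        agents += [Agent(100 + i, freerider=True) for i in range(3)]
    for agent in agents:
        # Freeriders reject every request and collect bad ratings;
        # well-behaving agents work and collect good ratings.
        agent.ratings.append(-1 if agent.freerider else +1)

print([agent.id for agent in agents if agent.isolated()])  # -> [100, 101, 102]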
One special problem of attacks by Freeriders is that they create a large amount of bad ratings in the system. In general, it is easy for agents to detect Freeriders because they do not accept any work. When agents detect a Freerider, they refuse