made for them are very specific as good results have
been observed in previous work.
In the literature one can find similar models based on observable markers such as tags that evolve over time. In the work of Riolo et al., cooperation can only occur between two agents a and b if |τ_a − τ_b| ≤ T_a holds, where τ_a is the tag value and T_a is a similarity threshold (Riolo et al., 2001). Hales also
made experiments based on this mechanism to determine cooperation (Hales, 2002; Hales, 2004). A difference to our work is that, in their models, adaptation means copying another agent's tag value, threshold, and strategy, whereas in our scenario the agents may only imitate the values without copying them exactly. Another difference is that we deal with a set of such inequalities that all have to be fulfilled. Moreover, Hales and Riolo et al. give only experimental results and do not formally analyze why cooperation emerges. We will formally show different cases in which cooperation may and may not emerge.
De Weerdt et al. (de Weerdt et al., 2007) calculate
task allocations using a distributed algorithm in a so-
cial network. A social network is a graph where the
nodes represent the agents and the edges model possi-
ble interaction links. The tasks are assigned to agents
which have limited resources. They show that the
problem of finding an optimal task allocation, which
maximizes the social welfare, is NP-hard. In contrast to the work presented here, their model does not consider cooperation costs, and their agents know about all tasks before the decision process starts. Another difference is the static social network structure: we analyze dynamic networks and show that the challenges of those networks favor cooperation between the agents.
2 SCENARIO DESCRIPTION
In this section we describe the formal model used
in this paper. Due to page limitations we will only
describe the features of the model and omit the formal definitions. They can be found in (Eberling and Kleine Büning, 2010a). We will first define the basic
model and then describe the considered scenario.
The agents in our model are linked together and
form a so-called interaction network IN. Basically,
the interaction network IN = (A,N ) is an undirected
graph with a finite set of agents A as the nodes and
a set of links N . The links between the agents
represent the neighborhood relationship. Therefore,
agents a and b are able to interact iff there exists an
edge between them in the interaction network, i.e.
{a,b} ∈ N . An interaction network is called dynamic
if the graph can change between successive simula-
tion steps. Note that due to the interaction network
the agents’ view of the system is local only.
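The interaction network above can be sketched as a small data structure. This is a minimal illustration under the stated definitions, assuming a plain adjacency representation; the class and method names are our own, not from the paper.

```python
# Minimal sketch of an interaction network IN = (A, N): an undirected
# graph over a finite agent set A with edge set N. Class and method
# names are illustrative choices, not part of the formal model.

class InteractionNetwork:
    def __init__(self, agents):
        self.agents = set(agents)              # node set A
        self.links = set()                     # edge set N; edges stored as frozensets {a, b}

    def connect(self, a, b):
        self.links.add(frozenset((a, b)))      # add an undirected link

    def disconnect(self, a, b):
        self.links.discard(frozenset((a, b)))  # a dynamic network may drop links between steps

    def can_interact(self, a, b):
        return frozenset((a, b)) in self.links # a and b interact iff {a, b} ∈ N

    def neighborhood(self, a):
        # An agent only sees its direct neighbors: its view is local.
        return {b for b in self.agents if frozenset((a, b)) in self.links}

net = InteractionNetwork(["a", "b", "c"])
net.connect("a", "b")
```

Storing each edge as a frozenset makes the symmetry of the neighborhood relation automatic: `can_interact("a", "b")` and `can_interact("b", "a")` consult the same set element.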
In our system the agents have to fulfill different
jobs consisting of smaller tasks. Each task requires a
specific skill out of a skill set, s_t ∈ S, and leads to a non-negative payoff q_t ∈ R_0^+ if the task is fulfilled. Therefore, a task t can be modeled as a pair t = (s_t, q_t).
Let T be the finite set of all possible tasks. Then
J ⊆ Pow(T ) is the set of all jobs. Hence, a job j ∈ J
is a set of tasks j = {t_1, ..., t_n} where t_min ≤ n ≤ t_max, with t_min, t_max ∈ N denoting the minimum and maximum number of
tasks. The payoff for a job is the sum of the tasks’
payoffs if it is fulfilled, i.e. if all tasks are fulfilled,
and zero otherwise.
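The task and job payoff definitions above can be made concrete in a few lines. This is a sketch under the definitions in this section; the names `Task` and `job_payoff` are our own, introduced for illustration.

```python
# Sketch of tasks t = (s_t, q_t) and the all-or-nothing job payoff:
# a job pays the sum of its tasks' payoffs iff every task is fulfilled,
# and zero otherwise. Names are illustrative, not from the paper.

from typing import NamedTuple

class Task(NamedTuple):
    skill: str      # required skill s_t ∈ S
    payoff: float   # non-negative payoff q_t ∈ R_0^+

def job_payoff(job, fulfilled):
    """Sum of the tasks' payoffs if all tasks of the job are fulfilled, else 0."""
    if all(t in fulfilled for t in job):
        return sum(t.payoff for t in job)
    return 0.0

j = {Task("weld", 3.0), Task("paint", 2.0)}       # a job j = {t_1, t_2}
print(job_payoff(j, fulfilled=j))                  # all tasks fulfilled -> 5.0
print(job_payoff(j, fulfilled={Task("weld", 3.0)}))  # one task missing -> 0.0
```

Because `NamedTuple` instances are hashable and compare by value, a job can be represented directly as a set of tasks, matching the definition j ⊆ T.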
The environment env the agents are situated in is
a tuple env = (S, P, IN, J) where S is a finite, non-empty set of skills, P = {p_1, ..., p_m} is a set of propositions, IN = (A, N) is an interaction network and J
is a finite set of jobs. The set of propositions is a means to model the decision process that determines cooperation partners based on multiple criteria. The agents
share the set of propositions that are part of the environment. These propositions can be opinions about the overall world state or the evolution of the environment. As we do not concentrate on the modeling of such propositions, we do not provide a formal definition. A proposition p can represent statements such as "The road is clear" in the context of a taxi-driving agent or "The color blue is prettier than black". For our purposes it is enough to know that there are propositions
that may influence the behavior of the agents. More
details can be found in (Eberling and Kleine Büning, 2010a).
An agent a ∈ A is a tuple a = (S_a, N_a, C_a, V_a, Θ_a) where S_a ⊆ S is the set of skills agent a is equipped with, N_a ⊆ A is the agent's neighborhood defined by the interaction network, C_a ⊆ N_a is the set of neighbors agent a is willing to cooperate with, V_a ∈ [0, v_max]^m ⊂ Q^m is a vector giving values to the propositions and finally Θ_a ∈ (0, Θ_max]^m ⊂ Q^m is a threshold vector. To keep the agents as simple as possible, only the proposition values are modeled as observable properties. All other parts of the agents (i.e. skills, thresholds and neighbors) are not visible to other agents and constitute private knowledge. Based on the values the agents give to the propositions, their cooperation partners are determined. The set of cooperation partners C_a of agent a consists of all neighbors b ∈ N_a for which the following holds:

∀p ∈ P : |V_a(p) − V_b(p)| ≤ Θ_a(p)   (1)
This means that for the cooperation partners the dis-
ICAART 2011 - 3rd International Conference on Agents and Artificial Intelligence