ability to capture prey (Dawkins, 2006).
In contrast to the previous transition, the environment seems to play a key role in the emergence of groups.
We argue that group creation can also be supported in an environment that does not favor groups, even in the presence of within-group competition. In this case, the driver of group creation is sociable behavior, which individuals develop in response to the (lack of) information in the environment.
2 METHODS
Group formation is driven by interactions, and capturing its dynamics in a mathematical model is extremely difficult. For this reason, we study the problem by means of a computer simulation able to reproduce a wide range of dynamics.
We design an environment that gives no advantage to group behavior: every agent competes for the same resource, which in our case is food.
The simulation environment is a square grid with periodic (toroidal) boundary conditions; each cell can contain a variable number of food units and agents. The number of cells containing food is set by a parameter and remains constant during the simulation: anytime a food source is depleted, a new food source is spawned at a random location. The continuous respawning of food sources is the mechanism that tests the quality of an agent's foraging strategy. The food source capacity (i.e., the maximum number of food units that a cell can contain) is set to a value high enough that a single agent cannot exhaust it before any other agent finds it. We discuss the effects of this parameter on our findings in the next section.
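The environment just described can be sketched as follows. This is a minimal illustration, not the authors' code: the class and method names, the default parameter values, and the respawn-by-rejection loop are our assumptions.

```python
import random

class GridWorld:
    """Square grid with periodic boundaries and a constant number of food sources."""

    def __init__(self, size=50, n_food_cells=20, food_capacity=30):
        self.size = size                  # side length of the square grid
        self.food = {}                    # (x, y) -> remaining food units
        self.food_capacity = food_capacity
        for _ in range(n_food_cells):
            self._spawn_food()

    def wrap(self, x, y):
        # Periodic (toroidal) boundaries: stepping off one edge
        # re-enters on the opposite edge.
        return x % self.size, y % self.size

    def _spawn_food(self):
        # Place a full food source at a random cell that has no food yet.
        while True:
            cell = (random.randrange(self.size), random.randrange(self.size))
            if cell not in self.food:
                self.food[cell] = self.food_capacity
                return

    def forage(self, cell):
        # An agent consumes one unit; when a source is depleted, a new one
        # spawns elsewhere, keeping the number of sources constant.
        if self.food.get(cell, 0) > 0:
            self.food[cell] -= 1
            if self.food[cell] == 0:
                del self.food[cell]
                self._spawn_food()
            return True
        return False
```

The rejection loop in `_spawn_food` is one simple way to keep the number of food-bearing cells constant, as the text requires.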
Agents are initially placed at random positions on the grid. Agent perceptions include the food in the current cell and the agents in the surroundings. The perception mechanism mimics the working of the retina in fish (Strandburg-Peshkin et al., 2013): perceptions indicate the number of agents in each of the four cardinal directions, but they are not refined enough to tell the exact number of agents in a specific cell, nor to distinguish between stationary and moving targets. The available actions are foraging and moving one cell in one of the four cardinal directions.
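One way to realize such a coarse, retina-like perception is to aggregate agent counts per cardinal direction within some radius, so that individual cells cannot be resolved. The radius, the torus distance computation, and the rule assigning a neighbor to its dominant axis are our assumptions, used here only to illustrate the idea.

```python
def perceive(agent_pos, others, size, radius=5):
    """Count nearby agents per cardinal direction on a size x size torus.

    Returns aggregate counts only: the exact number of agents in any
    single cell is deliberately not recoverable from the output.
    """
    ax, ay = agent_pos
    counts = {"N": 0, "S": 0, "E": 0, "W": 0}
    for ox, oy in others:
        # Shortest displacement on the torus, in [-size//2, size//2 - 1].
        dx = (ox - ax + size // 2) % size - size // 2
        dy = (oy - ay + size // 2) % size - size // 2
        if (dx, dy) == (0, 0) or max(abs(dx), abs(dy)) > radius:
            continue
        # Assign each neighbor to its dominant axis (ties go to E/W);
        # N is taken as positive y by convention here.
        if abs(dx) >= abs(dy):
            counts["E" if dx > 0 else "W"] += 1
        else:
            counts["N" if dy > 0 else "S"] += 1
    return counts
```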
At every simulation step, agents activate in a random order and execute one action based on their perceptions. The order of play is crucial, as agents compete for the same resources under a first-come first-served policy. Whenever an agent depletes a food source, a new source is immediately created somewhere else. Any other agent still queued in that cell now has to look for a new source, and the strategy it uses to do so makes the difference for its performance.
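The first-come first-served dynamics can be made concrete with a toy step function: agents are shuffled into a fresh activation order, and an agent arriving at an already-depleted cell gets nothing. The data layout (a list of `(id, cell)` pairs and a `food` dict) is our illustrative assumption.

```python
import random

def step(agents, food):
    """One simulation step: random activation order, first-come first-served.

    agents: list of (agent_id, cell) pairs; food: dict cell -> units left.
    Returns the ids of the agents that managed to eat this step.
    """
    order = agents[:]
    random.shuffle(order)        # fresh random order every step
    fed = []
    for agent_id, cell in order:
        if food.get(cell, 0) > 0:
            food[cell] -= 1      # earlier agents may deplete the source
            fed.append(agent_id)
    return fed
```

With a single food unit and two agents on the same cell, exactly one of them eats, whichever activates first.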
Agent decisions are based on the output of a simple neural network that maps a vector in the perception space to a vector in the action space. The agent executes the action corresponding to the highest value of the output vector (see Figure 1). At this stage learning is disabled, so the weights of the network remain constant for the whole simulation.
Figure 1: Diagram of the decision system. The weights associated with each connection produce a vector of output values; the action with the highest value is executed.
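The decision system of Figure 1 amounts to a single linear layer followed by an argmax. A minimal sketch, assuming a five-element perception vector (food in the current cell, then agent counts per direction) and five actions; this input/output layout is our guess at the architecture, not taken from the paper.

```python
# Assumed orderings (illustrative): perception = [food_here, n_N, n_S, n_E, n_W]
ACTIONS = ["forage", "N", "S", "E", "W"]

def decide(perception, weights):
    """Linear map from perception to action scores; execute the argmax.

    weights[a][i] is the fixed weight connecting input i to action a
    (learning is disabled, so weights never change during a run).
    """
    scores = [sum(w * p for w, p in zip(row, perception)) for row in weights]
    return ACTIONS[max(range(len(scores)), key=scores.__getitem__)]
```

For instance, with a very large weight on the food-to-forage edge, the presence of food in the current cell dominates every other score and the agent forages.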
An important assumption is that the location of food is unknown and is disclosed to an agent only upon entering the corresponding cell. This assumption makes the task non-trivial: being unable to perceive food at a distance, the only way to improve over a random walk is to find a foraging strategy that exploits a proxy for food location. The intuition is that, assuming all agents stop to eat whenever they encounter a food source, cells with more agents are more likely to be food sources. We call a strategy that exploits the positions of others “sociable”: a sociable strategy interprets the presence of agents as increasing the likelihood of food.
In our simulation we define two types of agents: random agents and sociable agents (SAs). Random agents, as the name suggests, walk randomly in the environment searching for food, independently of where other agents are. Their behavior is determined by random noise added to the randomly initialized weights. Sociable agents like company: they favor moving towards where other agents are. For example, if their perception shows the majority of agents to the east, the agent will move east. They are generated from a random agent by increasing the weights that connect specific input values to the desired actions. The only difference between the two types of agents is the average value of the weights. All agents are initialized with a very high weight on the edge that connects the perception “food in the current cell” to the action “forage”; this ensures that agents always forage when given the opportunity. This assumption does not remove generality since, from an evolutionary perspective, agents are expected to forage whenever possible.
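The two initialization schemes can be sketched side by side. A hedged illustration under stated assumptions: the weight magnitudes (`FORAGE_W`, `SOCIABLE_BOOST`), the Gaussian noise, and the input/action orderings are ours; only the structure (shared high food-to-forage weight, sociable agents obtained from random ones by raising the agents-seen-d to move-d weights) follows the text.

```python
import random

N_INPUTS = 5    # [food here, agents N, agents S, agents E, agents W]
N_ACTIONS = 5   # [forage, move N, move S, move E, move W]
FORAGE_W = 100.0        # illustrative magnitudes, not from the paper
SOCIABLE_BOOST = 10.0

def random_agent_weights():
    # Small random noise on every connection...
    w = [[random.gauss(0.0, 1.0) for _ in range(N_INPUTS)]
         for _ in range(N_ACTIONS)]
    # ...plus the shared very high food-to-forage weight, so every
    # agent forages whenever food is present in its cell.
    w[0][0] = FORAGE_W
    return w

def sociable_agent_weights():
    # A sociable agent is a random agent whose "agents seen in direction d"
    # to "move d" weights have been increased: it drifts towards company.
    w = random_agent_weights()
    for d in range(1, 5):
        w[d][d] += SOCIABLE_BOOST
    return w
```

With these weights fed to an argmax decision rule, a sociable agent moves towards the direction reporting the most agents, while a random agent's moves are dominated by the noise terms.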
ICAART 2016 - 8th International Conference on Agents and Artificial Intelligence