be more than one time-step ahead of the others.
3.6 Parallel and Fair
Controlled scheduling with multiple processes ensures fairness by constraining agents' actions through an explicit synchronisation at each time-step. Moreover, as several (physical) processes are available, performance generally increases because the computations implied by agent reasoning can now be executed in parallel. In this implementation, the main simulation loop is reduced to a simple synchronisation barrier that waits for all agents' actions, as sketched below. The choice between immediate and deferred action execution has the same consequences as in the case of controlled scheduling with one process.
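A minimal sketch of such a loop, assuming hypothetical names (BarrierScheduler, a print statement in place of real agent behaviour), could rely on java.util.concurrent.CyclicBarrier, with the deferred environment update placed in the barrier action so that it runs exactly once per step, after all agents have acted and before any of them resumes:

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

// Hypothetical sketch of a main loop reduced to a barrier. The barrier
// action plays the role of the deferred environment update.
public class BarrierScheduler {
    public static void main(String[] args) throws InterruptedException, BrokenBarrierException {
        final int nAgents = 4;
        final int steps = 10;
        final CyclicBarrier stepBarrier = new CyclicBarrier(nAgents + 1,
                () -> System.out.println("-- apply gathered actions --"));

        for (int i = 0; i < nAgents; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    for (int t = 0; t < steps; t++) {
                        System.out.println("agent " + id + " acts at step " + t);
                        stepBarrier.await();      // fairness: wait for everyone
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }

        // The scheduler itself is just one more party at the barrier.
        for (int t = 0; t < steps; t++) {
            stepBarrier.await();
        }
    }
}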
With the advent of multi-core architectures, this approach can leverage the raw computing power available in current hardware infrastructures. Nevertheless, process creation and switching costs have to be measured and balanced against the cost of evaluating behaviours in order to obtain a real speedup. On a multi-core CPU, this approach does not require major code refactoring, but if the execution target is a GPU, the translation is not straightforward. If special care is taken to simplify agent reasoning into a finite state automaton, this approach can scale on GPU infrastructures, as demonstrated by the FLAME-GPU framework (Richmond, 2009).
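As an illustration of this flattening (not FLAME-GPU's actual API, which is based on C and XML model descriptions), a hypothetical sheep behaviour reduced to a finite state automaton could look as follows: one state variable and one branch-only transition function, a shape that lends itself to lock-step evaluation over thousands of agents:

// Hypothetical sketch of agent reasoning reduced to a finite state
// automaton: no unbounded control flow, only state transitions.
public class FsmSheep {
    enum State { GRAZING, FLEEING, BREEDING }

    private State state = State.GRAZING;

    // Inputs are plain values so the step could be evaluated in
    // lock-step over a whole agent population.
    public void step(boolean wolfNearby, double energy) {
        switch (state) {
            case GRAZING:
                if (wolfNearby)        state = State.FLEEING;
                else if (energy > 0.8) state = State.BREEDING;
                break;
            case FLEEING:
                if (!wolfNearby)       state = State.GRAZING;
                break;
            case BREEDING:
                state = State.GRAZING; // breed once, then graze again
                break;
        }
    }
}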
Figure 3 illustrates a classical implementation where each agent has its own thread and where a synchronisation barrier is used to guarantee equity in talk. Thus, at each loop iteration, all agents are woken up and have to proactively store their action in a shared resource.
Environment env = createAndInit();
List<Agent> agents = createAndInit();
List<Action> actions = init();
// Launching all agent threads
for (Agent agent : agents) {
    new Thread(agent).start();
}
while (!simulationFinished()) {
    List<Agent> activeAgents = mix(agents);
    List<Agent> current = choose(activeAgents);
    waitAllAgentsActions(activeAgents);
    env.apply(actions);
    updatePopulation(agents);
    updateProbes(env, agents);
}
Figure 3: A parallel and fair classical implementation.
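Figure 3 shows only the scheduler side. A complementary sketch of the agent side, under the same assumptions (the shared action queue and the barrier below are hypothetical counterparts of the structures used in Figure 3), could be:

import java.util.Queue;
import java.util.concurrent.CyclicBarrier;

// Hypothetical agent-side counterpart to Figure 3: each woken agent
// deliberates, proactively stores its action in the shared resource,
// then blocks on the barrier that waitAllAgentsActions() releases.
public class AgentRunnable implements Runnable {
    private final Queue<String> sharedActions;  // String stands in for Action
    private final CyclicBarrier stepBarrier;

    public AgentRunnable(Queue<String> sharedActions, CyclicBarrier stepBarrier) {
        this.sharedActions = sharedActions;
        this.stepBarrier = stepBarrier;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                String action = "move";         // deliberate() in a real agent
                sharedActions.add(action);      // store action for env.apply()
                stepBarrier.await();            // equity in talk for this step
            }
        } catch (Exception e) {
            Thread.currentThread().interrupt();
        }
    }
}

Here, sharedActions would typically be a ConcurrentLinkedQueue shared with the main loop of Figure 3, so that concurrent stores need no explicit locking.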
Prey-predator: as this model does not need simultaneity, special care should be taken to prevent two agents of the same neighbourhood from acting in parallel. Otherwise, artefacts can appear, such as a wolf trying to eat a sheep that is no longer present at execution time because it has simultaneously moved away. This problem can easily be solved by decoupling action gathering from action execution and by giving wolves priority over sheep, as sketched below.
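A minimal sketch of this decoupling, with hypothetical Action objects carrying a wolf/sheep flag and a validity check, could be:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: actions are gathered during the parallel phase,
// then executed sequentially with wolves served before sheep, so a
// wolf never eats a sheep that has already moved away.
public class DeferredExecution {
    interface Action { boolean fromWolf(); boolean stillValid(); void execute(); }

    static void applyGatheredActions(List<Action> gathered) {
        List<Action> ordered = new ArrayList<>(gathered);
        // Wolves first: true sorts before false once reversed.
        ordered.sort(Comparator.comparing(Action::fromWolf).reversed());
        for (Action a : ordered) {
            if (a.stillValid()) {   // e.g. the target sheep is still there
                a.execute();
            }
        }
    }
}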
Game of Life: as with prey-predator, it is necessary to defer action execution, otherwise cells from different time-steps are mixed. The speedup should not be significant in this specific model because cell computations are not costly. Unless a specific implementation on a GPU, with the environment completely embedded within GPU memory (Perumalla and Aaby, 2008), is used, sequential versions should be faster than parallel ones.
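A standard way to defer execution here is double buffering: all cells are read from the current grid and written to a separate next grid. A minimal sketch, assuming a plain boolean grid:

// Hypothetical sketch of deferred execution for the Game of Life:
// reads come from one buffer, writes go to another, so states from
// different time-steps never mix.
public class LifeStep {
    static boolean[][] step(boolean[][] read) {
        int h = read.length, w = read[0].length;
        boolean[][] write = new boolean[h][w];  // next time-step buffer
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int n = liveNeighbours(read, x, y);
                write[y][x] = read[y][x] ? (n == 2 || n == 3) : (n == 3);
            }
        }
        return write;
    }

    static int liveNeighbours(boolean[][] g, int x, int y) {
        int count = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++) {
                if (dx == 0 && dy == 0) continue;
                int ny = y + dy, nx = x + dx;
                if (ny >= 0 && ny < g.length && nx >= 0 && nx < g[0].length
                        && g[ny][nx])
                    count++;
            }
        return count;
    }
}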
Stock Market: contrary to prey-predator, as no simultaneity is possible in this model, there is no issue if two agents act simultaneously: within an order book, one order always arrives before another. But the fact that traders can run in parallel implies that some gain could be observed for costly trading behaviours. Nevertheless, process synchronisation costs reduce the gain that could be obtained with this approach.
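A minimal sketch of this property, assuming a hypothetical OrderBook backed by a thread-safe queue, shows how concurrent submissions are harmlessly serialised into a single arrival order:

import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: trader threads submit orders concurrently; the
// thread-safe queue imposes one total arrival order, so "simultaneous"
// submissions are harmless: one always lands before the other.
public class OrderBook {
    public static final class Order {
        final String trader;
        final double price;
        Order(String trader, double price) { this.trader = trader; this.price = price; }
    }

    private final ConcurrentLinkedQueue<Order> arrivals = new ConcurrentLinkedQueue<>();

    public void submit(String trader, double price) {   // many trader threads
        arrivals.add(new Order(trader, price));
    }

    public Order next() {                               // matching engine side
        return arrivals.poll();                         // strict arrival order
    }
}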
3.7 Parallel and Unfair
Finally, uncontrolled scheduling with multiple processes can be seen as a special kind of individual-based simulator where the focus is put on real-time simulation. As no scheduling is done on agents and actions can occur simultaneously, this approach is adapted to real-time interactive simulations. Illustrations of this special kind of simulation are mainly related to Massively Multi-player Online Role Playing Games (MMORPG) or serious games (pedagogical games). This level of parallelism enables scaling in the number of agents and in response time. It should be noted that in such settings, questions of reproducibility or fairness are no longer pertinent. This context should mainly be used to enable human-in-the-loop simulations in a real-time setting, which is the case in games and serious games. Equity in talk has another meaning here: virtual agents should be slowed down in order to enable humans to react within the same timing as the virtual agents, for instance by pacing the agent loop as sketched below.
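A minimal sketch of such a slowdown, assuming a fixed wall-clock budget per decision step:

// Hypothetical sketch: each virtual agent is throttled to a fixed
// wall-clock budget per decision, leaving humans time to react.
public class RealTimePacer {
    public static void main(String[] args) throws InterruptedException {
        final long stepMillis = 100;                 // 10 decisions per second
        for (int step = 0; step < 50; step++) {
            long start = System.currentTimeMillis();
            // agent.deliberateAndAct() would run here
            long remaining = stepMillis - (System.currentTimeMillis() - start);
            if (remaining > 0) {
                Thread.sleep(remaining);             // deliberately slow down
            }
        }
    }
}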
Prey-predator: in this setting, the only problem that can occur is the simultaneous modification of adjacent agents. It could be solved by a locking mechanism ensuring that simultaneity cannot occur in these situations, as sketched below.
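A minimal sketch of such a mechanism, assuming one lock per grid cell and a fixed acquisition order to avoid deadlocks between neighbouring agents:

import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: one lock per grid cell, always acquired in
// ascending index order, so two neighbouring agents can never deadlock
// while simultaneous updates of adjacent cells are excluded.
public class CellLocks {
    private final ReentrantLock[] locks;
    private final int width;

    public CellLocks(int width, int height) {
        this.width = width;
        locks = new ReentrantLock[width * height];
        for (int i = 0; i < locks.length; i++) locks[i] = new ReentrantLock();
    }

    // Lock the acting agent's cell and its target cell atomically.
    public void withBothCells(int x1, int y1, int x2, int y2, Runnable action) {
        int a = y1 * width + x1, b = y2 * width + x2;
        ReentrantLock first = locks[Math.min(a, b)], second = locks[Math.max(a, b)];
        first.lock();
        try {
            second.lock();
            try {
                action.run();   // e.g. the wolf eats the sheep
            } finally {
                second.unlock();
            }
        } finally {
            first.unlock();
        }
    }
}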
Game of Life: again, in this context, the model cannot be guaranteed unless strong synchronisation and deferred action execution are enforced. Such a move would reduce any gains that could