abstractions shows significant benefits in flexibility
and adaptivity required in most simulation scenarios.
Therefore, this work focuses on
simulations performed on VMs and their
parallelisation. Considering large-scale ABS, the
large-scale distributed Web, with millions of servers
and billions of users, is an attractive distributed
machine for simulation.
The agent model itself exhibits inherent parallelism
due to its low degree of coupling, both to the
processing platform and between agents. Interaction
between agents commonly takes place via well-defined
message-based communication, e.g., by using
synchronised tuple spaces. Therefore, the agent
model is an inherently parallel and distributed
processing model that natively relies on a distributed
memory model (DMM). However, in simulation worlds a
shared memory model (SMM) is often used for
efficient and simplified agent interaction and
communication. Typical examples of SM-based
multi-agent systems (MAS) are NetLogo (Tisue,
2004) and SESAM (Klügl, 2006). Commonly, agent
models used in simulation cannot be deployed in
real computing environments. In addition to ABS
there is Agent-based Computation (ABC),
commonly involving totally different agent
processing platforms (APP) and agent models.
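The tuple-space interaction mentioned above can be sketched with a minimal in-memory example. The operation names (`out`, `rd`, `in`) follow the classic Linda model and are used here for illustration only; they are not necessarily JAM's actual API:

```javascript
// Minimal synchronised tuple-space sketch (illustrative, not JAM's API).
// Agents coordinate by depositing and pattern-matching tuples instead of
// sharing memory directly.
class TupleSpace {
  constructor() { this.tuples = []; }
  // out: deposit a tuple into the shared space
  out(tuple) { this.tuples.push(tuple); }
  // rd: non-destructive read of the first tuple matching a pattern,
  // where null fields act as wildcards
  rd(pattern) {
    const hit = this.tuples.find(t =>
      t.length === pattern.length &&
      pattern.every((p, i) => p === null || p === t[i]));
    return hit === undefined ? null : hit;
  }
  // in: destructive read (take) of a matching tuple
  in(pattern) {
    const hit = this.rd(pattern);
    if (hit !== null) this.tuples.splice(this.tuples.indexOf(hit), 1);
    return hit;
  }
}

// A producer agent deposits a reading; a consumer agent takes it by pattern:
const ts = new TupleSpace();
ts.out(['sensor', 'temp', 21.5]);
const reading = ts.in(['sensor', 'temp', null]);
```

Because agents only ever touch the space, not each other's state, this style of interaction preserves the low coupling that makes the agent model inherently parallel.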
The starting point of this work is an already
existing unified agent model that can be used for
ABS and ABC in real-world data processing
environments (Bosse, 2019). Simulation of
MAS is performed by using the same platform for
ABS and ABC, the JavaScript Agent Machine
(JAM) (Bosse, 2020), which can be processed by
any generic JavaScript (JS) VM such as nodejs or a
Web browser engine (e.g., SpiderMonkey). Applying
parallelisation to VMs is difficult and limited to a
few special cases. The most significant barriers to
parallelisation in VMs are the automatic memory
management (AMM) and garbage collection (GC),
which prohibit an SMM. Parallelisation is considered
here as a synonym for the distribution of computation
and is not distinguished further in this work.
The JAM platform already supports distributed,
loosely coupled platform networks, i.e., a set of
nodes ℕ = {N₁, N₂, .., Nₙ} is connected by an arbitrary
communication graph G = 〈ℕ, ℂ〉 that links the nodes
by point-to-point communication channels ℂ = {cᵢ,ⱼ}.
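Such a network can be represented minimally as a node set plus an edge list of point-to-point channels; the node names below are illustrative:

```javascript
// Sketch of a JAM-style platform network G = <N, C>:
// a node set N connected by point-to-point channels C only.
const nodes = ['N1', 'N2', 'N3'];
// Channels c(i,j) as an undirected edge list
const channels = [['N1', 'N2'], ['N2', 'N3']];

// Neighbourhood lookup: which nodes can a given node reach directly?
function neighbours(n) {
  return channels
    .filter(([a, b]) => a === n || b === n)
    .map(([a, b]) => (a === n ? b : a));
}
```

Since the graph is arbitrary, any topology (chain, mesh, star) can be expressed this way; only directly connected nodes can exchange messages.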
However, there is no DMM; agents on different nodes
are independent. JAM agents are capable of migrating
between nodes (by code and data snapshot check-
pointing and migration). This feature virtually
implements a kind of distributed memory, but
without any central managing instance or group
communication. Basically, an agent carries an
isolated region of the distributed memory, and
memory access is only possible via agent
communication (using TS/signals). JAM networks
are inherently distributed by strict data and control
decoupling, and there are no shared resources among
the set of nodes. Up to this point, we have a well-
scaling distributed network; indeed, there is no upper
bound on the number of connected nodes.
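The snapshot-based migration described above can be sketched roughly as follows. The agent structure and function names are assumptions made for illustration, not JAM's real implementation:

```javascript
// Rough sketch of snapshot-based agent migration (illustrative only).
// An agent's state is serialised on the source node, transferred as a
// message, and resumed on the destination node.
const agent = {
  next: 'percept',                 // activity to resume after migration
  state: { position: 0, hops: 0 }, // mobile data carried by the agent
};

// Checkpoint on the source node: capture a code reference + data snapshot
function checkpoint(a) {
  return JSON.stringify(a);        // stands in for the migration message
}

// Restore on the destination node: rebuild the agent and let it continue
function restore(snapshot) {
  const a = JSON.parse(snapshot);
  a.state.hops += 1;               // the agent resumes on the new node
  return a;
}

const migrated = restore(checkpoint(agent));
```

Because the snapshot travels with the agent, no central instance has to track agent state: the "distributed memory" is exactly the union of the regions the agents carry.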
The already existing Simulation Environment for
JAM (SEJAM) (Bosse, 2019) extends one physical
JAM node with a visualisation layer and simulation
control and enables simulation of real-world JAM
agents situated in an artificial two-dimensional geo-
spatial simulation world. Additionally, the JAM
node of SEJAM can be connected to any other
external JAM node, providing real-world-in-the-loop
simulation (i.e., agents from a real-world vehicle
platform connected via the Internet can migrate into
the simulation world and vice versa!). Virtualisation
of JAM nodes enables simulation of JAM networks
by SEJAM. In contrast to purely computational JAM
networks, the simulator tightly couples its JAM
nodes by shared memory (SM) and is connected to all
parts of the JAM node, including direct agent access.
Transforming this SM into a distributed memory (DM)
architecture would cause significant Interprocess
Communication (IPC) costs through messaging,
limiting the speed-up.
In this work, three main strategies are applied and
evaluated to achieve almost linear speed-up scaling
for large-scale distributed simulations:
1. Strict decoupling of visualisation and
simulation control from computation (of
agents and platforms);
2. Adding a Distributed Object Memory layer
(DOM) to the existing JAM platform to
enable distributed but coupled JAM node
networks with distributed shared objects
and virtualisation;
3. Mapping of simulation entities (virtual
platforms and agents) onto multiple coupled
physical platforms while preserving spatial
and communication context (environmental
and agent distribution based on the principles
discussed in (Rihawi, 2014)).
In (Šišlák, 2009), a broadly similar approach
was applied to large-scale agent systems, using signal
and message communication for node coupling, but
limited to local agent interaction. The approach
presented in this work imposes no communication-
range limitations. To understand the challenges and
pitfalls of different approaches a short introduction