completely filled and the machine has to wait for
the WT timeout to expire, it will still run at a
higher percentage of fill. If there are no semi-
filled batches of the current lot type, Baseline
chooses the queue with the least overall length (re-
gardless of lot type) to start the new batch.
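The Baseline place-into-queue rule for batch machines could be sketched as follows. This is an illustrative Python sketch, not the SwarmFabSim source; the queue and batch data structures (and the fixed batch capacity of 4) are assumptions made for the example.

```python
# Illustrative sketch (not the SwarmFabSim source) of the Baseline
# place-into-queue rule on batch machines. Each queue holds a list of
# batch records; the batch capacity of 4 is an assumed example value.

def place_lot(queues, lot_type):
    """Pick the queue for a new lot of `lot_type`.

    Prefer the semi-filled batch of the same lot type with the
    highest fill; if no such batch exists, start a new batch at the
    queue with the least overall length, regardless of lot type.
    """
    # Collect semi-filled batches of the current lot type.
    candidates = [
        (q, b) for q in queues for b in q["batches"]
        if b["lot_type"] == lot_type and 0 < b["fill"] < b["capacity"]
    ]
    if candidates:
        # Join the candidate batch with the highest fill level.
        queue, batch = max(candidates, key=lambda qb: qb[1]["fill"])
        batch["fill"] += 1
        return queue
    # No semi-filled batch: start a new batch at the shortest queue.
    queue = min(queues, key=lambda q: sum(b["fill"] for b in q["batches"]))
    queue["batches"].append({"lot_type": lot_type, "fill": 1, "capacity": 4})
    return queue
```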
take-from-queue on single-lot-oriented machines
FIFO.
take-from-queue on batch machines if there is a
full batch of a type t in the queue (or a batch fills
up during the WT timeout), this batch is taken. If
there are several full batches in the queue, one is
chosen at random.
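The take-from-queue rule for batch machines above can be sketched in a few lines. Again this is an illustrative Python sketch under the same assumed batch representation, not the actual SwarmFabSim code.

```python
# Illustrative sketch (not the SwarmFabSim source) of the
# take-from-queue rule on batch machines: if any full batch exists,
# take one, choosing uniformly at random among several full batches.
import random

def take_from_queue(batches):
    """Return and remove a full batch from `batches`, or None.

    A batch that fills up during the WT timeout is simply full by
    the time this is called; if no batch is full, the machine keeps
    waiting.
    """
    full = [b for b in batches if b["fill"] >= b["capacity"]]
    if not full:
        return None  # no full batch available yet
    chosen = random.choice(full)
    batches.remove(chosen)
    return chosen
```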
3.4 Log File Output
When SwarmFabSim is run from BehaviorSpace, it
produces four different types of result log files, which
are all in a plain text, comma-separated value format:
*-table.csv this is the standard Netlogo log file. It
logs a line for every tick; in our configuration it
writes the following values: the run number, the
chosen swarm algorithm, the chosen allocation
strategy, whether a config file was used and its file
name, whether debugging and visualization mode
were turned on, the tick number, and the average,
max, and min queue length in the system at this
tick. As this file logs one line for each tick, its
influence on simulation performance can be no-
ticeable, especially if the medium on which the
log files reside is slow.
*-lmvs.csv this is a custom log file that tracks lot
movements between machines in detail and logs
a line for every tick and lot movement. It logs the
run number, the current tick, the mode of move-
ment (lot got put into a queue, moved into a ma-
chine, moved out of a machine), the lot type,
the unique lot number, the position of the recipe
pointer (how far advanced along the recipe the lot
is), the machine type, the unique machine id and
a slot number in case of a batch machine. Due to
the high level of detail in this log file it can become
quite large. Because it logs a line for every lot
movement at every tick, potentially many lines per
tick, writing this log file has the biggest influence
on simulation performance.
*-kpi.csv this is a custom log file that is written once
at the end of every simulation run. It logs the
run number, the average flow factor, tardiness, and
machine utilization, plus the total makespan (the
simulation duration in ticks). Since this file logs
only one line at the end of each simulation run, its
influence on simulation performance is low.
*-lots.csv this is a custom log file that is written once
per lot at the end of every simulation run. It logs
the run number, the lot type, the unique lot num-
ber, the total queuing time, raw processing time,
and total processing time for the lot, plus the over-
all flow factor and tardiness for this lot. Since this
file is only written once at the end of each simula-
tion run, its influence on simulation performance
is not very large, although the file itself can be-
come quite big if the number of lots is very large.
Since the *-table.csv and *-lmvs.csv log files have
the most impact on simulation performance, simu-
lations should be performed on machines with fast
drives to store these files so as not to slow down sim-
ulation unnecessarily. A lot of performance can also
be gained by not logging to files whose information
you are not going to analyze; e.g., if you do not need
the detailed lot movement information in *-lmvs.csv,
disabling this log file will speed up simulation.
4 EVALUATION
Our paper does not focus on general techniques for
speeding up simulations, such as turning off visual-
ization, using a large multi-processor server with a
fast SSD disk, sampling the parameter space before
starting detailed simulations, or making sure that no
unnecessary calculations are repeatedly performed in
the “go” procedure. Instead, we showcase the per-
formance of current Netlogo implementations using
the example of our SwarmFabSim framework with
typical models and one of our algorithms. We mea-
sure the runtime of one run without visualizations
for several typical large models on a typical laptop
a researcher would have access to.
4.1 Scenario Configuration
We designed our test scenario inspired both by the
aforementioned real-world data of between 400 and
1200 different stations in a fab that produces prod-
ucts in around 300 different process steps, and by
the SMT2020 testbed (Kopp et al., 2020). The SMT2020
datasets contain up to 105 workcenters (machine
types) and recipes for up to 10 products that are be-
tween 242 and 583 steps in length.
Our test scenario uses a fab of 100 workcen-
ters (machine types) of between 4 and 12 machines
each and 10 product types that have recipes of 250
to 350 steps in length. We then load this fab with
100, 1000, 5000, and 10K lots (divided among the
SIMULTECH 2024 - 14th International Conference on Simulation and Modeling Methodologies, Technologies and Applications