Another benefit of static code analysis: no finite
number of runs can reveal everything that is possible.
Neither testing, execution traces, nor animations can
necessarily discover all things that could occur in
some simulation run. Static analysis, however, can
reveal the possibility of infrequent situations occurring.
Data flow techniques also can be used to deter-
mine if race conditions exist. In a common implemen-
tation of a discrete event simulation, an events list is
checked before the clock is advanced. Ideally, the or-
der in which events that are scheduled to occur at the
same time happen to be on the list should not matter
to the simulation results; however, analysis can flag
this possibility – or point out a potential surprise to
the modeler.
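As a minimal sketch of the tie-ordering issue described above (all names here are hypothetical, not taken from any particular simulation package), consider two events scheduled for the same clock time whose order of execution changes the final state:

```python
def run(tie_break):
    """Run a tiny discrete event simulation; `tie_break` is a
    hypothetical knob ordering events scheduled for the same time."""
    state = {"x": 0}
    # Two events on the events list at the same clock time t=1.
    events = [(1, "double"), (1, "add5")]
    # Sort by time, breaking ties with the supplied ordering.
    events.sort(key=lambda e: (e[0], tie_break(e[1])))
    for _, name in events:
        if name == "double":
            state["x"] *= 2
        elif name == "add5":
            state["x"] += 5
    return state["x"]

# The final state depends on which simultaneous event fires first:
a = run(lambda n: 0 if n == "double" else 1)  # double, then add5 -> 5
b = run(lambda n: 0 if n == "add5" else 1)    # add5, then double -> 10
```

Since `a != b`, the simulation results depend on events-list order for simultaneous events, exactly the kind of possibility static analysis can flag for the modeler.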
2.2 Insights – Dynamic Code Analysis
Static code analysis has its limitations; it can identify
possible actions, but because code can be arbitrarily
complex, the results of the analysis can occasionally
be misleading. From static analysis,
one may discover that event A can cause event B, but
dynamic analysis often can reveal specifics of which
events caused which events – that is, which event
caused a particular event, and which event(s) a par-
ticular event caused – which cannot always be deter-
mined prior to run-time. In addition, if static analysis
suggests that event A can cause event B, but dynamic
analysis reveals that this never happens, this may be
of interest to a modeler or user of the simulation.
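The run-time causality recording described above could be sketched as follows, assuming an event scheduler instrumented to note which event scheduled which (all names are illustrative assumptions, not from the paper):

```python
import heapq

caused_by = {}   # effect event id -> cause event id
future = []      # priority queue of (time, event id, name)
next_id = 0

def schedule(time, name, cause=None):
    """Schedule an event; record its cause if one is given."""
    global next_id
    eid = next_id
    next_id += 1
    if cause is not None:
        caused_by[eid] = cause
    heapq.heappush(future, (time, eid, name))
    return eid

root = schedule(0, "A")
while future:
    time, eid, name = heapq.heappop(future)
    if name == "A":
        # Event A schedules B; dynamic analysis records that A caused B.
        schedule(time + 1, "B", cause=eid)
```

After the run, `caused_by` holds the specific cause-effect pairs that actually occurred, information a purely static analysis could only report as possible.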
The anecdote in Section 1 about the grouping of
events by frequency of occurrence is another example
of useful information revealed through dynamic anal-
ysis.
2.3 Insights – Examining the Model
Differently, Interactively
Alas, source code tends to involve many issues un-
related to the model itself, such as data collection,
animation, and tricks for efficient run-time behav-
ior. Unless the modeler is an expert programmer,
this other code tends to obscure the model as implemented.
Consider the benefit if one could be shown only
what one currently considers relevant.
Weinberg identified the importance of program lo-
cality (Weinberg, 1971), the property obtained when
all relevant parts of a program are found in the same
place. He noted that “...when we are not able to find a
bug, it is usually because we are looking in the wrong
place.” Since “issues of concern” vary widely, no sin-
gle organization of a program can exhibit locality for
all such concerns. Additionally, as the problem of
interest changes, the information considered relevant
also changes.
Consider if there were multiple model “views,”
where one could see only the aspects of (current) in-
terest, and as the aspects of interest changed, so could
what was shown to the modeler or model user. Per-
haps these “slices,” not unlike Weiser’s program slices
(Weiser, 1984), could aid in allowing model charac-
teristics to be more easily understood.
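One hedged sketch of such concern-based “views,” assuming model components can be tagged with the concerns they touch (the component names and tags below are illustrative, not from any real model):

```python
# Each model component is tagged with the concerns it involves.
components = {
    "arrival_event":  {"model logic", "statistics"},
    "service_event":  {"model logic"},
    "update_display": {"animation"},
    "record_wait":    {"statistics"},
}

def view(concern):
    """Return the slice of the model relevant to one concern."""
    return sorted(name for name, tags in components.items()
                  if concern in tags)
```

A modeler interested only in model logic would see `view("model logic")`, i.e. the arrival and service events, with animation and data-collection code filtered out; as the concern changes, so does the view.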
An additional interactive possibility – a zoomable
action cluster interaction graph – is discussed below.
2.4 Insights – Visualizing the Model
When given an unfamiliar model to modify or use,
modelers and model users traditionally examine text
output, source code, and perhaps animations if avail-
able. Animations aside, most analyses are not par-
ticularly visual, a shame since pictures help us build
mental models (Glenberg and Langston, 1992) – in
this context, a mental model of the encoded model.
Some work has been done on the “visuality” of
models. Program visualization (which is distinct
from visual programming – “the body of techniques
through which algorithms are expressed using vari-
ous two-dimensional, graphic, or diagrammatic nota-
tions,” such as Nassi-Schneiderman diagrams (Nassi
and Schneiderman, 1973)) has similarly stated goals:
“to facilitate a clear and correct expression of the
mental images of the producers (writers) of computer
programs, and to communicate these mental images
to the consumers (readers) of programs” (Baecker,
1988). However, the approach to doing so “focuses
on output, on the display of programs, their code, doc-
umentation, and behavior” and “[displaying] program
execution in novel ways independent of the method of
specification or coding” (Baecker, 1988).
A prime problem with model descriptions,
whether in textual or graphical notations, is that even
for simple models, the descriptions are often difficult
to fully comprehend. Even in relatively simple cases,
the wealth of data available can easily obscure useful
information in the volume of what is presented. The
type of tools for which we argue may help with the
problem of having “too much information” by allow-
ing the interactive exploration of a model so that only
relevant information is presented.
An information mural (Jerding and Stasko, 1998)
is a technique for displaying and navigating large in-
formation spaces. The goal of the mural is to visu-
alize a particular information space, displaying what
the user wants to see and allowing the user to focus
quickly on areas of interest. As Jerding and Stasko
aptly state, “A textual display of such voluminous in-