The Complexity Crisis
Using Modeling and Simulation for System Level Analysis and Design
François E. Cellier¹, Xenofon Floros¹ and Ernesto Kofman²
¹Swiss Federal Institute of Technology (ETH Zurich), Department of Computer Science, Zurich, Switzerland
²National University of Rosario (UNR), Department of Automatic Control, CIFASIS-CONICET, Rosario, Argentina
Keywords:
System Level Modeling, System Level Simulation, Modeling and Simulation Environments, Error Tolerant
Modeling, Decentralized Simulation.
Abstract:
This paper discusses the current state of the art of modeling and simulation environments and proposes a set of
enhancements that will be necessary for such environments to meet future demands. Modeling and simulation
are categorized in accordance with the size of the models to be handled: component level modeling, device
level modeling, and system level modeling. It is shown that the requirements that modeling and simulation
environments need to satisfy in order to meet the demands of modelers are vastly different at these three
levels.
1 INTRODUCTION
In the 1970s and 1980s, modelers were mostly content
simulating systems at component level. They stud-
ied the performance of a particular type of motor, for
example. Typically, such models were characterized
by half a dozen state variables and a few dozen al-
gebraic variables. A typical general purpose model-
ing and simulation (M&S) environment used in those
years was ACSL (Cellier, 1991; Mitchell and Gau-
thier, 1986). ACSL offered vertical sorting of equa-
tions and a (rather primitive) Macro language that en-
abled users to group the equations of sub-components
together. As of the early 1980s, ACSL also offered a
(fairly limited) set of tools for discontinuity handling.
However, this was not the important point. The most
important feature of ACSL and similar environments
was that they freed the modeler from having to under-
stand numerical ODE solution algorithms. The mod-
eler was given a selection of integration algorithms
to choose from, but did not need to understand how
the model was being simulated. ACSL supported
a strict separation of the model equations from the
solver equations, and the modeler could thus focus on
those aspects that he or she understood well, namely
describing the model in mathematical terms.
For models of larger size, tools like ACSL were
mostly useless. Larger models coded in ACSL turned
quickly into spaghetti code just as bad as if they
had been coded in Fortran or C. Most modelers real-
ized this, and therefore, larger models were mostly
coded in general purpose programming languages,
such as Fortran. Also, many physical systems do not
lend themselves to causal modeling. The computa-
tional causality of the equations governing these sys-
tems depends on their embedding, and consequently,
causal modeling is doomed to failure. For example,
an electrical resistor is characterized by Ohm’s law:
u = R · i (1)
but the equal sign must not be interpreted as an as-
signment equal sign. Depending on the embedding
of the resistor in its surrounding circuit, the law may
need to be turned around, leading to the equation:
i = (1/R) · u (2)
For this reason, ACSL could not be used to model and
simulate electrical circuits, for example.
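To illustrate the point, the following sketch (in Python, using the sympy package; the setup is ours and not taken from any particular tool) shows how an equation-based environment resolves the same a-causal equation for whichever variable the embedding leaves unknown:

```python
# Minimal sketch: the same a-causal equation u = R*i is solved for whichever
# variable the embedding leaves unknown (voltage source vs. current source).
import sympy as sp

u, i, R = sp.symbols("u i R", positive=True)
ohms_law = sp.Eq(u, R * i)

# Resistor driven by a voltage source: u is known, solve for i.
print(sp.solve(ohms_law, i))   # [u/R]

# Resistor driven by a current source: i is known, solve for u.
print(sp.solve(ohms_law, u))   # [R*i]
```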
Special purpose modeling languages were de-
signed to offer support for modeling particular classes
of implicitly formulated models, such as Spice (Tu-
inenga, 1995) for the simulation of electrical analog
circuits, or Adams (Ryan, 1990) for the simulation
of mechanical multi-body systems. These tools were
quite successful and found widespread use, but they
were also limited in their applicability. If a particular
component model had not been foreseen by the de-
signers of the tool, such as maybe a Zener diode in
Spice, the tool became quickly useless.
The 1990s saw the advent of a new class of M&S
environments that were based on the object-oriented
modeling paradigm. They offered horizontal next to
vertical sorting, thereby enabling the modeler to for-
mulate the model equations in an a-causal fashion.
Although a prototype of an object-oriented M&S en-
vironment had already been implemented in the late
1970s (Elmqvist, 1978), it took 15 more years until a
commercial and more mature implementation of these
important new concepts had become available in the
form of a commercial release of the Dymola language
(Cellier, 1991; Cellier and Elmqvist, 1993). In 1996,
the Dymola language was revised once again, and an
open standard under the name of Modelica was intro-
duced (Mattsson et al., 1997).
The Modelica initiative turned into a true
success story, and Modelica has meanwhile be-
come the de facto industry standard for model-
ing and simulating all types of physical lumped
parameter systems. The Modelica Association
(https://www.modelica.org/association) maintains the
Modelica language definition (Modelica Association,
2007) as well as the Modelica Standard Library
(MSL) (Modelica Association, 2008), to which many
researchers from around the globe contribute on a reg-
ular basis, and which by now offers large numbers of
models from many different application domains. In
the meantime, the MSL has become as valuable as if
not more valuable than the language definition itself,
and a number of both commercial and open source
implementations of the Modelica language are being
offered that all work with the MSL. Modelica conferences are held at frequent intervals and attract several hundred attendees each time, and large research projects amounting to tens of millions of Euros have been funded to further enhance the Modelica language and the MSL.
By means of the object-oriented modeling
paradigm that is being embraced by Modelica, it has
become possible to connect component models reli-
ably to form larger models; thus, Modelica has be-
come the tool to use for device level modeling and
simulation. Typical Modelica models now are com-
prised of several dozens of state variables and several
hundreds of significant algebraic equations, after all
of the trivial connection equations of the type:
a = b (3)
have already been eliminated.
Most Modelica implementations offer sophisti-
cated symbolic preprocessing, including algorithms
for index reduction (Cellier and Elmqvist, 1993; Pan-
telides, 1988; Zimmer, 2010), tearing of algebraic
loops (Elmqvist and Otter, 1994; Zimmer, 2010), dis-
continuity handling (Cellier, 1979; Elmqvist et al.,
1993; Otter et al., 1999), and static as well as dy-
namic state selection (Mattsson et al., 2000), to the
extent that the generated simulation code bears lit-
tle resemblance to the original model equations any
longer. The generated simulation code is at least as ef-
ficient as if not more efficient than the spaghetti codes
of the past, and Modelica models can even compete in
simulation efficiency with the special purpose model-
ing languages (Spice, Adams) designed earlier.
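As a minimal illustration of the kind of symbolic processing involved, the sketch below (Python; the tiny circuit and its variable names are invented for illustration and do not correspond to any particular Modelica tool) performs the most basic step, sorting already-causalized equations into an executable evaluation order by means of a dependency graph:

```python
# Minimal sketch of vertical sorting: once each equation has been causalized
# (assigned the variable it computes), the equations are ordered so that every
# variable is computed before it is used. The tiny circuit below is illustrative.
from graphlib import TopologicalSorter

# variable computed -> variables it depends on (the input u_src and the state i_L are known)
dependencies = {
    "u_R": {"i_L"},            # u_R := R * i_L
    "u_L": {"u_src", "u_R"},   # u_L := u_src - u_R
    "der_i_L": {"u_L"},        # der(i_L) := u_L / L
}

print(list(TopologicalSorter(dependencies).static_order()))
# e.g. ['i_L', 'u_src', 'u_R', 'u_L', 'der_i_L']
```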
Information hiding is one of the most important
features of object-oriented modeling. Implementation
details are hidden inside component models that the
end user rarely needs to consult. A graphical user
interface (GUI), spearheaded once again by the de-
signers of Dymola but later standardized as part of
the Modelica language, offers the modeler the ability
to hierarchically structure his or her models in such
a way that the implementation details of the compo-
nent models of lower levels of the hierarchy are en-
capsulated and hidden from users who interface with
their model at a higher level of the modeling hierar-
chy. At each abstraction level, the models can be de-
signed to fit on a single screen, which helps signifi-
cantly with model understandability, debugging, and
maintenance.
Yet, and contrary to the special purpose languages
of the past, all physical knowledge is encoded at the
Modelica modeling level itself as part of the MSL.
Hilding Elmqvist, the originator of Dymola, once
proudly stated:
The most important feature of my language is
that Dymola does not understand physics.
The only knowledge that is hard encoded in the
Modelica compilers is algorithmic knowledge of how
to convert implicitly formulated sets of differential
and algebraic equations (DAEs), possibly containing
structural singularities and algebraic loops and most
likely containing discontinuities, to sets of explicitly
formulated ordinary differential equations (ODEs)
that can be processed by ODE solvers (Cellier and
Kofman, 2006). All of the physical knowledge is soft
encoded in the MSL. This is indeed a very important
feature as it makes Modelica (Dymola) much more
flexible than any of the special purpose languages of
the past.
As models with several dozens of state variables
are almost invariably stiff, most Modelica environ-
ments offer a stiff system solver as their default ODE
or DAE solver. The most commonly used default
solver is DASSL (Petzold, 1983), a code based on a
variable-step, variable-order series of backward dif-
ference formulae (BDF) of different orders of approx-
imation accuracy that are well suited for the simula-
SIMULTECH2013-3rdInternationalConferenceonSimulationandModelingMethodologies,Technologiesand
Applications
IS-6
tion of stiff systems. DASSL is chosen as the default
solver, because it is the most robust stiff system solver
on the market today. It can simulate almost any model
thrown at it. It does not always do so in a highly ef-
ficient manner, but robustness counts for much more
than efficiency.
Most Modelica environments also offer a set of
alternative solvers to choose from, but the large ma-
jority of users never select a different solver, because
they don’t understand numerical methods anyway and
therefore have no inkling as to which solver might be
best suited for their particular model. Earlier versions
of Dymola offered the user many compiler switches
that could help the user in optimizing the efficiency
of his simulation runs. All of these compiler op-
tions (except for the selection of the solver itself) have
meanwhile been thrown out again (or at least have
been hidden behind undocumented internal compiler
switches that only some employees at Dynasim know
about), because they only confused their engineer-
ing end users. One should never ask users to supply
pieces of information that they do not understand, be-
cause otherwise, they get frustrated and walk away to
another supplier who does not ask questions that they
don’t know how to answer. Offering a palette of dif-
ferent algorithms is fine, but only as long as the M&S
environment is smart enough to figure out on its own which of these methods to use under what conditions.
Neither Dymola nor any other Modelica imple-
mentation has been successful in entering the mar-
ket for simulations of distributed parameter systems.
Although the method of lines (Cellier and Kofman,
2006) makes it easy to convert partial differential
equations (PDEs) to sets of ODEs, the resulting ODE
models cannot be simulated efficiently using Dymola.
The reason is that PDEs, unless we are dealing with
toy problems, invariably lead to ODE models com-
prised of hundreds if not thousands of ODEs, and Dy-
mola is not currently capable of simulating such ODE
systems efficiently, at least not if the resulting equa-
tion systems are stiff.
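The following sketch illustrates the point (Python with scipy; the grid size and parameters are arbitrary): applying the method of lines to even a toy heat equation already produces hundreds of coupled, stiff ODEs that call for an implicit solver.

```python
# Method-of-lines sketch: discretize u_t = u_xx on [0, 1] with N interior points.
# Even this toy PDE yields N coupled, stiff ODEs; a stiff solver (BDF) handles it.
import numpy as np
from scipy.integrate import solve_ivp

N = 200                          # interior grid points -> 200 state variables
dx = 1.0 / (N + 1)

def heat_rhs(t, u):
    # second-order central differences, boundary values held at zero
    d2u = np.empty_like(u)
    d2u[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
    d2u[0] = -2.0 * u[0] + u[1]
    d2u[-1] = u[-2] - 2.0 * u[-1]
    return d2u / dx**2

u0 = np.sin(np.pi * np.linspace(dx, 1.0 - dx, N))    # initial temperature profile
sol = solve_ivp(heat_rhs, (0.0, 0.1), u0, method="BDF", rtol=1e-6, atol=1e-9)
print(sol.status, sol.t.size, "accepted steps")
```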
As the step from simple component level models
involving half a dozen ODEs to device level models
containing several dozens of ODEs opened up a large
set of new questions and called for drastically dif-
ferent solutions, the same happens once again when
we proceed from several dozens of ODEs to several
hundreds or possibly thousands of ODEs, i.e., when
we advance from device level models to system level
models. These models call once again for drastically
different solutions, and that is what this paper is all
about.
2 SYSTEM LEVEL MODELING
We are building ever more complex systems. The first
Airbus 380 was delivered to the customer with a de-
lay of roughly one year, because the producers of the
aircraft were not capable of testing the aircraft in time
in all of its potential modes of operation. The over-
all system has become too complex. The electrical
system alone is of frightening complexity. The Air-
bus 380 cable tree is 500 km long and features tens of
thousands of connections. If we are no longer capable
of testing such an aircraft in all of its potential modes
of operation before handing it over to the customer,
we should at least be able to simulate it using a real-
istic full-scale model of the aircraft capturing all fea-
tures of the vehicle using realistically accurate math-
ematical models. Such a system level description will
definitely feature many hundreds of state variables.
An air-tower training simulator should be able to
simulate not only one aircraft, but the entire fleet of
air vehicles moving about the airport. As airplanes
arrive, their models should become successively more
complex on their own as they approach touch-down,
whereas the models of departing aircraft should be-
come simpler as they disappear from the vicinity of
the airfield. Some pilots may be humans in training,
whereas other pilots are algorithms that form part of
the simulation. The same goes for the air traffic con-
trollers. We need to simulate simultaneously the mo-
tion of the air vehicles, the communication channels,
and even weather patterns, such as wind velocity and
direction, and we should feed the visible scenery in
the form of optical information to the air tower con-
trollers training in the air-tower simulator as well as
to the pilots training in the aircraft simulator that is
located next to the air-tower simulator.
The human body is incredibly complex. At the
current time, we are barely capable of simulating one
single organ in isolation with a halfway satisfactory
degree of realism. Now the call is out for full-body
simulations. This is the only way we can begin to understand how our metabolism deals with drugs, for example, and which side effects are produced in the process. When a surgeon opens up a patient, the body reacts violently and in many different ways to the intervention. The heart rate rises, and the blood pressure may either rise or fall rapidly. Hormones
are being produced and dispatched by many organs
in parallel. When difficult interventions need to be
made, an anesthesiologist supports the surgeon and
tries to counteract the reactions of the patient, keep-
ing the vital signs of the patient within safe bounds.
Yet, the anesthesiologist operates in a strictly reactive
mode with only limited understanding of what hap-
TheComplexityCrisis
IS-7
pens inside the patient. It is only his or her experi-
ence with similar cases that ensures that he or she will
take the correct decisions quickly and reliably. Full-
body simulations may help to improve this situation
without putting a real patient at risk, but we are still
far from being able to offer realistic full-body simu-
lations. What we know for a fact is that these models
will invariably be of high complexity and feature hun-
dreds if not thousands of state variables.
Object-oriented modeling scales generally well. If
we knew all of the equations that we need, we could
probably encode such a complex model in Modelica
without too much difficulty. Yet, even Dymola, the
most advanced of the current Modelica implemen-
tations, would fail miserably in dealing with any of
these models both at compile time and at simulation
time. We shall now discuss why this is the case.
3 VARIABLE STRUCTURE
MODELS
One of the models proposed above mentioned the
need to simplify aircraft models on the fly. Currently,
Modelica does not offer a convenient way of describ-
ing the exchange of models at event times, and Dy-
mola is incapable of compiling and simulating such
models, even if they have been described in an indi-
rect fashion. Neither Modelica nor Dymola are cur-
rently designed to handle variable structure models.
This is a major shortcoming of Modelica, as variable
structure systems are quite common. Any mechanical
system with a clutch is a variable structure system.
When the clutch is engaged, the two axles connected
by the clutch move in unison, and consequently, the
total number of differential equations is smaller than
when the clutch is disengaged and the two axles move
separately one from the other.
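A minimal sketch of such a variable structure simulation is given below (Python with scipy; the inertias, damping coefficients, and the fixed engagement time are invented for illustration). The disengaged model has two states, the engaged model has one, and the model swap happens at the event, outside the solver:

```python
# Variable-structure sketch of the clutch example: two inertias rotate freely
# (two states) until the clutch engages at t = 1 s, after which they move in
# unison (one state). The model swap happens at the event, outside the solver.
import numpy as np
from scipy.integrate import solve_ivp

J1, J2, d1, d2, tau = 1.0, 2.0, 0.1, 0.2, 1.0     # illustrative parameter values

def disengaged(t, w):            # two differential equations
    w1, w2 = w
    return [(tau - d1 * w1) / J1, (-d2 * w2) / J2]

def engaged(t, w):               # one differential equation
    return [(tau - (d1 + d2) * w[0]) / (J1 + J2)]

phase1 = solve_ivp(disengaged, (0.0, 1.0), [0.0, 5.0])
w1, w2 = phase1.y[:, -1]
w_common = (J1 * w1 + J2 * w2) / (J1 + J2)        # angular momentum is conserved
phase2 = solve_ivp(engaged, (1.0, 5.0), [w_common])
print(w1, w2, "->", w_common, "->", phase2.y[0, -1])
```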
Dirk Zimmer recently proposed a remedy to this
shortcoming in his Ph.D. dissertation (Zimmer, 2010). He designed a variant of the Modelica language, which he called Sol, that makes it possible to formulate models that can be activated, deactivated, and even duplicated at
event times. He offered a prototype implementation
of a Sol compiler and a prototype run-time environ-
ment to simulate the compiled Sol models.
Yet, this is only a first step. In the example of the
simplified aircraft models, for example, it should not
even be necessary to manually code aircraft models
of different complexity and encode the conditions un-
der which a model change is to occur. In principle,
it should suffice to encode the most complex aircraft
model and leave it up to the compiler to generate sim-
plified versions thereof on its own using automated
model pruning techniques (Sen et al., 2009). Tests
could be automatically generated that look at the dy-
namics of the trajectories produced by the simulation,
and when the fast transients have died out, automati-
cally switch to a simplified version of the model.
4 MODEL DEBUGGING
SUPPORT
Anyone who has already modeled a decently complex
system using Modelica knows that debugging a Mod-
elica model can be quite difficult. Coding a model
is easy, but when the compiler tags the model as sin-
gular, meaning that the number of equations does not
equal the number of unknowns, locating the error in
the model can be a challenge.
Tools were developed that help us reduce the like-
lihood of obtaining cryptic compiler error messages,
e.g. by balancing the model connectors (Olsson et al.,
2008). Yet the problems are still formidable, and this is already the case in today's device level models. If the size of
the model grows once again by one order of magni-
tude, as this will invariably happen when we proceed
from device level simulations to system level simu-
lations, modeling errors may occur much more fre-
quently, and finding them just by looking at the code
may become a hopeless undertaking. We shall need
help in this task (Pop et al., 2012).
One possible approach might be to invoke auto-
mated model pruning. When the compiler tags a
model as singular, it automatically simplifies one sub-
model after another. If the singularity disappears in
the process, the error is most likely to be found in the
device model that has just been simplified. Thus, the
compiler can offer help in better reporting compile-
time errors.
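The following sketch illustrates the idea (Python; the incidence data and submodel names are hypothetical). A model is tagged as structurally singular when its equations cannot be matched one-to-one with distinct unknowns; removing one submodel at a time and repeating the check points to the likely culprit:

```python
# Sketch of pruning-based error localization: a model is structurally singular
# when its equations cannot be matched one-to-one with distinct unknowns.
# Removing one submodel at a time and repeating the check points to the culprit.

def matched(equations):
    """Greedy augmenting-path matching of equations to the unknowns they mention."""
    match = {}                                   # unknown -> equation index

    def augment(eq, seen):
        for var in equations[eq]:
            if var in seen:
                continue
            seen.add(var)
            if var not in match or augment(match[var], seen):
                match[var] = eq
                return True
        return False

    return all(augment(eq, set()) for eq in range(len(equations)))

# Hypothetical flat model: submodel -> list of equations (each a set of unknowns).
submodels = {
    "source":   [{"u_src"}],
    "resistor": [{"u_R", "i"}, {"u_R", "u_src", "u_L"}],
    "faulty":   [{"u_L"}, {"u_L"}],              # two equations, one unknown
}

flat = [eq for eqs in submodels.values() for eq in eqs]
print("full model structurally singular:", not matched(flat))

for name in submodels:                           # prune one submodel at a time
    rest = [eq for other, eqs in submodels.items() if other != name for eq in eqs]
    if matched(rest):
        print("singularity most likely located in:", name)
```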
5 MODEL CONNECTION
COMPATIBILITY SUPPORT
Not all models can work correctly with one another.
The trick often used today to prevent incompatible models from being connected to each other is to make their connectors incompatible. If a plug doesn’t fit into a
socket, that is a good indication that the device behind
that plug should not get connected to the system at
this location.
However, the compatibility issues are often more
subtle. We need to be able to check that a variable
never assumes a negative value, for example. It could
also be that a connection between two models is only
SIMULTECH2013-3rdInternationalConferenceonSimulationandModelingMethodologies,Technologiesand
Applications
IS-8
valid for positive power flow from the first model to
the second, but not when the power flow is reversed.
It may be that a motor model is only valid up to a cer-
tain maximum rotational speed or a maximum level of
armature voltage. Modelica offers already now assert
statements that can be used to catch various types of
incompatibility issues, but many users are hesitant to
use assert statements on variables rather than param-
eters, because these will slow down the simulation.
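As a sketch of what such checks amount to (Python; the limits and trajectories are invented), the monitor below applies assert-style validity conditions after the fact to stored trajectories, a variant that does not slow down the simulation loop itself:

```python
# Minimal sketch of assert-style validity monitoring applied to simulation output:
# the checks mirror what a Modelica assert() on variables would do, but are run
# on the stored trajectories so they do not slow the simulation loop itself.
import numpy as np

W_MAX = 300.0          # hypothetical maximum admissible rotational speed [rad/s]

def check_validity(t, w, power):
    violations = []
    if np.any(np.abs(w) > W_MAX):
        violations.append("rotational speed exceeds validity range of motor model")
    if np.any(power < 0.0):
        violations.append("power flow reversed: connection only valid for positive flow")
    return violations

t = np.linspace(0.0, 1.0, 101)
w = 280.0 + 40.0 * np.sin(2 * np.pi * t)          # made-up trajectories
power = 50.0 * np.cos(2 * np.pi * t)
for msg in check_validity(t, w, power):
    print("WARNING:", msg)
```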
Also in the context of variable structure models,
the current assert statements may not be powerful
enough. We must be able to encode under what con-
ditions we are allowed to switch from one model to
another, for example.
Finally in the context of automated model prun-
ing, it is not clear how assert statements need to be
adjusted in the process of model pruning. This issue
needs to be studied.
6 MODEL PLAUSIBILITY
SUPPORT
When checking the results obtained from simulating
a model, we usually only look at those few variables
that are most important to us. If the trajectories gener-
ated by the model look plausible, we accept the model
as probably correct. However, it may happen that
the simulation generates entirely implausible trajec-
tory behavior for some other variables that we failed
to look at. We would have rejected the model had we
looked at any of those trajectories.
The problem gets worse as the size of the model
increases. In a system level simulation, we shall still
not have time to look at more than a handful of tra-
jectories. Thus, we need language support to prevent
implausible trajectory behavior from remaining unno-
ticed. It should be possible to specify behavioral pat-
terns for variables that we expect them to exhibit. If
the simulations fail to generate the predicted patterns,
these variables should get flagged by the simulation
so that the modeler will look at the generated trajec-
tories explicitly before accepting the model as being
plausible.
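A possible form of such a specification is sketched below (Python; the pattern, tolerance, and trajectory are invented): the modeler declares the behavior a variable is expected to exhibit, and any trajectory violating the pattern is flagged for inspection.

```python
# Sketch of a plausibility pattern check: the modeler declares the behavior a
# variable is expected to exhibit (stay positive, settle near a set point), and
# any trajectory that violates the pattern is flagged for manual inspection.
import numpy as np

def flag_implausible(name, t, x, expect_positive=True, settle_to=None, tol=0.05):
    flags = []
    if expect_positive and np.any(x < 0.0):
        flags.append(f"{name}: assumes negative values")
    if settle_to is not None:
        tail = x[t > 0.8 * t[-1]]                      # last 20% of the run
        if np.any(np.abs(tail - settle_to) > tol * abs(settle_to)):
            flags.append(f"{name}: does not settle to expected value {settle_to}")
    return flags

t = np.linspace(0.0, 10.0, 501)
pressure = 1.2 - 0.4 * np.exp(-t) + 0.5 * np.sin(3 * t) * np.exp(-0.1 * t)  # made up
for msg in flag_implausible("pressure", t, pressure, settle_to=1.2):
    print("FLAGGED:", msg)
```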
A similar feature is also needed for fault detection.
In a watchdog monitor (Cellier et al., 1992; de Al-
bornoz, 1996), real-time simulation results are com-
pared with measurement data obtained from the mon-
itored plant. If the simulated trajectories deviate from
the observed behavior in significant ways, an alarm is
issued that alerts the plant operator to a possible fault
in the system (Escobet et al., 1999). In this case, the
model had been validated before, and it is the plant
that is expected to be at fault rather than its model.
7 THE CURSE OF LARGE-SCALE
SYSTEM SIMULATION
Models of large-scale systems are invariably stiff.
Among the state variables simulated, some will al-
ways change faster than others. This is particularly
true for multi-energy domain simulations. For exam-
ple in a mechatronic system, the electrical time con-
stants are always faster than the mechanical ones by
about two orders of magnitude. If the thermal domain
is to be studied as well, thermal states will have time
constants that are slower than the mechanical ones
again by one or two orders of magnitude. Thus, such
multi-energy domain models are always stiff.
ODE solvers that can be used to simulate stiff sys-
tems must be implicit solvers. Explicit solvers will
need to use step sizes throughout the simulation that
are small in comparison with the fastest time con-
stants captured by the model as otherwise the simu-
lation will become numerically unstable (Cellier and
Kofman, 2006). The numerical stability domains
(Cellier and Kofman, 2006) of stiff system solvers
loop in the right-half complex λ·h plane. This avoids numerical stability problems when simulating analytically stable systems. All explicit solvers have numerical stability domains that loop in the left-half complex λ·h plane, and only some (but not all) implicit solvers feature numerical stability domains that loop in the right-half complex λ·h plane (Cellier and Kofman, 2006).
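The practical consequence is easy to demonstrate (Python with scipy; the toy linear system below has time constants of 1 s and 1 ms): the explicit solver remains throttled by the fast eigenvalue long after its transient has died out, while the implicit BDF solver is not.

```python
# Numerical illustration of stiffness: a linear system with time constants of
# 1 s and 1 ms. The explicit RK45 solver is throttled by the fast eigenvalue
# long after its transient has died out; the implicit BDF solver is not.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-1.0, 0.0],
              [0.0, -1000.0]])     # eigenvalues -1 and -1000

def rhs(t, x):
    return A @ x

x0 = [1.0, 1.0]
for method in ("RK45", "BDF"):
    sol = solve_ivp(rhs, (0.0, 10.0), x0, method=method, rtol=1e-6, atol=1e-9)
    print(f"{method:5s}: {sol.nfev:6d} right-hand-side evaluations")
```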
Implicit solvers (including all stiff system solvers,
such as DASSL (Petzold, 1983) or Radau (Hairer and
Wanner, 1999)) need to iterate over a number of it-
eration variables in each integration step using New-
ton iteration (Cellier and Kofman, 2006). The itera-
tion variables usually include all state variables plus
a few algebraic (so-called tearing) variables that are
needed to break algebraic loops that remain in the
model (Cellier and Kofman, 2006).
The simulation is formulated in the following
way:
F(z_{k+1}) = 0 (4)

where z is the vector of iteration variables to be calculated at the end of the next integration step, i.e., at time t_{k+1}, and F(·) is a set of zero functions that need to be solved for the unknown iteration variables. The
Newton iteration can be formulated as follows:
z^{l+1}_{k+1} = z^l_{k+1} - H^{-1} · F (5)
where H = ∂F/∂z is the Hessian matrix (Cellier and Kofman, 2006). We start out with initial guesses of the values of the iteration variables, z^0_{k+1}. The (l+1)-st iteration can then be computed from the l-th iteration using the above formula.
The Newton iteration can be set up as follows:
H · δz = F (6)
z^{l+1}_{k+1} = z^l_{k+1} - δz (7)
Thus, within the Newton iteration of a non-linear model, a set of linear equations needs to be solved during each iteration step. The convergence speed
of the Newton iteration, i.e., the number of iterations
needed for convergence, depends heavily on the ac-
curacy with which the elements of the Hessian matrix
are being estimated and on the accuracy with which
the linear equations are being solved.
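A minimal concrete instance of Eqs. (4)-(7) is sketched below (Python; the model function is invented): one backward Euler step on a small non-linear ODE, with the matrix H estimated by finite differences and the linear system of Eq. (6) solved for the correction.

```python
# One implicit (backward) Euler step solved with the Newton iteration of
# Eqs. (4)-(7): F(z_{k+1}) = z_{k+1} - z_k - h*f(z_{k+1}) = 0. The matrix H is
# estimated column by column with finite differences, exactly the part that
# could be distributed over several cores.
import numpy as np

def f(z):                                     # hypothetical non-linear model
    return np.array([-2.0 * z[0] + z[1] ** 2, -50.0 * z[1] + 1.0])

def newton_step(z_k, h, tol=1e-10, max_iter=20):
    def F(z):
        return z - z_k - h * f(z)

    z = z_k.copy()                            # initial guess z^0_{k+1}
    for _ in range(max_iter):
        Fz = F(z)
        if np.linalg.norm(Fz) < tol:
            break
        H = np.empty((z.size, z.size))        # finite-difference estimate of dF/dz
        eps = 1e-7
        for j in range(z.size):               # columns are mutually independent
            zp = z.copy()
            zp[j] += eps
            H[:, j] = (F(zp) - Fz) / eps
        dz = np.linalg.solve(H, Fz)           # Eq. (6): H * dz = F
        z = z - dz                            # Eq. (7)
    return z

z_next = newton_step(np.array([1.0, 1.0]), h=0.1)
print(z_next)
```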
The Newton iteration is the crucial part of the sim-
ulation as this is where the simulation spends most of
its time. A first improvement over the current state
of the art would therefore be to invoke a linear sys-
tem solver that exploits the multi-core architecture of
modern computers. Also the estimation of the ele-
ments of the Hessian matrix can be easily parallelized.
Different columns of the Hessian matrix can be com-
puted in parallel on separate cores. These perfor-
mance improvements have not been crucial up until
now, because most Dymola models formulated today
at device level simulate within a few seconds of real
time anyway.
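Because the finite-difference columns of the matrix are mutually independent, they can indeed be farmed out to several cores. The sketch below (Python; the model function and dimensions are invented) does so with a process pool:

```python
# Sketch of estimating the iteration matrix column by column on several cores:
# each column of the finite-difference Jacobian only needs one extra evaluation
# of the model equations and is independent of all the others.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

N, EPS = 8, 1e-7

def model(z):                                  # stand-in for the model equations
    return np.tanh(z) + 0.1 * z ** 2

Z0 = np.linspace(0.0, 1.0, N)
F0 = model(Z0)

def jac_column(j):
    zp = Z0.copy()
    zp[j] += EPS
    return (model(zp) - F0) / EPS

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        columns = list(pool.map(jac_column, range(N)))
    H = np.column_stack(columns)
    print(H.shape)                             # (8, 8)
```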
Unfortunately, the execution time needed for the
Newton iteration, and thereby for the entire simula-
tion, grows cubically in the number of iteration vari-
ables. Thus, if today’s device level simulations take
seconds to simulate, system level simulations that in-
volve ten times as many iteration variables will take
hours to simulate. This is why Dymola is not yet com-
petitive when simulating distributed parameter mod-
els (Link et al., 2009).
The situation has recently improved with the addi-
tion of sparse matrix solvers to Dymola for dealing
with large sets of linear equations (Braun et al., 2012).
A speed-up of a factor of up to 20 was reported on
some test problems. Yet, this is still insufficient to
make Dymola competitive for the simulation of larger
PDE models.
Future generations of computers will offer larger
numbers of cores, and making use of a linear system
solver and a Hessian generator that fully exploit the
multi-core architecture can push the barrier yet a bit
further, but this alone will not overcome the complex-
ity issue. The simulation time still grows cubically
in the number of iteration variables. For stiff system
simulations to scale well, we shall need to get away
from large Newton iterations.
The multi-core architecture can also be exploited
in other ways. First attempts at distributing transmis-
sion line Modelica models over a multi-core architec-
ture were recently reported by Sjölund (Sjölund et al., 2010). Sjölund writes in his article that it is very im-
portant that the distribution of the model over the ar-
chitecture be accomplished in a fully automated fash-
ion by the Modelica compiler. Otherwise the tech-
nique will become unmanageable when dealing with
large-scale models. We shall explore this avenue fur-
ther in the next section of this article.
8 QUANTIZED STATE SYSTEM
SIMULATION
In recent years, a new class of ODE solvers has been
developed that is based on state quantization rather
than time slicing (Cellier and Kofman, 2006; Kofman,
2003). Rather than asking the question:
What values will the state variables assume at the next sampling instant, t_{k+1}?

we ask the question:

At what time will the state variable x_i change by more than ±δx_i in comparison with its current value?
A quantized state system (QSS) solver operates nat-
urally in an asynchronous mode. Each state variable
carries its own simulation clock. If the states of a sub-
system change very little, the model equations captur-
ing the dynamics of that subsystem will be executed
rarely. In the context of QSS simulations, automated
model pruning may become less of an issue and may
in fact turn out to be unnecessary. A dormant model
occupies space on the disk, but it does not slow down
the simulation, as its equations will not get executed.
As we proceed from device level simulation to sys-
tem level simulation, the relative advantage of QSS
solvers will become more noticeable, as they escape
the curse of complexity.
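The following sketch shows the core of a (non-stiff, first-order) QSS solver for a single state variable (Python; the quantum and the test model are arbitrary): each step answers the question of when the state will have moved by one quantum, and an update is emitted only then.

```python
# Minimal QSS1 sketch for a single state variable, dx/dt = f(q), here f(q) = -q.
# Instead of stepping in time, the solver asks when the state will have moved
# by one quantum dq from its last quantized value, and only then emits an update.
def qss1(f, x0, dq, t_end):
    t, x = 0.0, x0
    q = x0                            # quantized state seen by the rest of the model
    history = [(t, q)]
    while t < t_end:
        dx = f(q)
        if dx == 0.0:
            break                     # state is quiescent: no further events
        dt = dq / abs(dx)             # time until |x - q| reaches the quantum
        t, x = t + dt, x + dx * dt
        q = x                         # new quantized value, broadcast as an event
        history.append((t, q))
    return history

trajectory = qss1(lambda q: -q, x0=1.0, dq=0.05, t_end=5.0)
print(len(trajectory), "state updates")
print(trajectory[:3])
```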
QSS solvers can easily exploit a multi-core ar-
chitecture. Since each state variable minds its own
business, the solvers associated with the individual
state variables can be easily distributed over multi-
ple cores. This needs to be done intelligently in or-
der to minimize inter-processor communication, but
QSS solvers don’t suffer the fate of the centralized
stiff system solvers of the past. This is a research area
that we are currently exploring (Ph.D. dissertation of
Xenofon Floros to be defended in 2013). In order to
solve the synchronization problem between the differ-
ent simulators running on separate cores, each state
variable synchronizes its own simulation clock with a
scaled version of the real-time clock, but the different
SIMULTECH2013-3rdInternationalConferenceonSimulationandModelingMethodologies,Technologiesand
Applications
IS-10
simulators don’t synchronize their individual clocks
among each other (Bergero et al., 2013).
Recently, first versions of stiff QSS solvers have
been developed (Migoni et al., 2013; Migoni et al.,
2012). Although stiff QSS solvers are necessarily im-
plicit, they do not require Newton iteration. The rea-
son is that the next state is known in advance. If the
current state has a value of x_i, the next state will assume a value of x_i ± δx_i. Thus in the worst case, we
need to explore two alternatives. The average simu-
lation step of a stiff QSS solver therefore consumes
only about 20% more real time in comparison with its
non-stiff twin.
These solvers nevertheless still carry the risk of
a combinatorial explosion, because transitions in one
state may trigger immediate transitions in other vari-
ables as well. Thus in the worst case, we may need to
check for 2
n
potential next states. To prevent combi-
natorial explosion, the current generation of stiff QSS
solvers has been implemented in a linearly implicit
fashion only (Migoni et al., 2013). This often works
well, but if the stiffness of the model is not caused by
large and small elements on the diagonal of the Hes-
sian matrix, e.g. in discretized parabolic PDEs, the
“stiff” QSS solvers lose their stiffness property, and
therefore, stiff QSS solvers are not yet as robust as
e.g. DASSL. More research in this area is still needed.
Distribution over multiple cores may help also in this
respect. Since the different columns of the Hessian
matrix can be computed in parallel, it should be possi-
ble to also limit the combinatorial explosion in similar
ways.
First results of simulating large-scale models us-
ing stiff QSS solvers were recently published (Berg-
ero et al., 2012). We simulated a long transmission
line (not a stiff system) with a load that made the
overall system very stiff. Thus, Dymola, using cen-
tralized simulation, has no choice but to make use of
a stiff system solver on all state variables and suffers
the consequences of a large Hessian matrix. The stiff
QSS solver simulated this system 1000 times faster
than Dymola, even without exploiting the multi-core
architecture, which is the type of performance that we
need when proceeding from device level to system
level simulation.
First simulation results using parallel implemen-
tations of stiff QSS solvers distributed over a multi-
core architecture showed very promising results as
well (Bergero et al., 2013). Parallel implementations
of QSS solvers scale very well. Whereas most par-
allel implementations of dynamic system simulations
saturate quickly as more cores are being added (due
to the overhead associated with inter-processor com-
munication), we did not notice any saturation effects
yet when distributing the transmission line simulation
over 12 parallel processes (six cores operating each in
an interleaved mode).
In fact, the speed-up was even slightly faster than
linear, because the overhead associated with manag-
ing the event queue currently grows quadratically in
the number of events being managed (it could grow
with n·log(n), but the event handler has not yet been implemented in an optimal fashion in the current release of the tool). As each processor carries its own event queue, the average length of the individual event
queues is smaller in the parallel setup, which explains
the faster-than-linear speed-up.
QSS solvers are also more efficient than classi-
cal solvers in handling discontinuities as they do not
require an iteration to localize event times. Power-
electronic circuits with high frequency switching sim-
ulate currently about 20 times faster using QSS
solvers than using DASSL. This feature, too, will become more important when proceeding from device level simulation to system level simulation.
Event times (discontinuities) can be assumed to occur
in a statistically independent fashion. Therefore we
can assume that a model that is ten times as big fea-
tures on average about ten times as many discontinu-
ities per time unit. Thus, the event density grows lin-
early with the size of the model. The execution time
of classical solvers grows quadratically with the event
density, whereas that of QSS solvers grows only lin-
early with the number of events per time unit. This is
true even for non-stiff models using non-stiff solvers
for their simulation (Grinblat et al., 2012).
For all of the above reasons, we expect that sys-
tem simulation tools of the future will employ decen-
tralized multi-core implementations of QSS solvers
as their default simulation algorithms, but the QSS
solvers need to become more robust before this can
happen.
9 CONCLUSIONS
In this article, it was shown that the demands on
a modeling and simulation environment vary widely
with the complexity of the systems to be modeled and
the size of the models to be simulated. Modern M&S
environments are useful for modeling devices and for
simulating device level models, but they are not yet
useful for modeling complex systems and for simu-
lating system level models. A series of avenues were
sketched outlining how some of the shortcomings of
current M&S environments may be overcome.
TheComplexityCrisis
IS-11
ACKNOWLEDGMENTS
This research has in part been funded by CTI grant
No. 12101.1;3 PFES-ES and supported by the
OPENPROD-ITEA2 project. The support is grate-
fully acknowledged.
REFERENCES
Bergero, F., Floros, X., Fernandez, J., Kofman, E., and
Cellier, F. (2012). Simulating Modelica Models with
a Stand-alone Quantized State Systems Solver. In
Proceedings 9th International Modelica Conference,
pages 237–246, Munich, Germany.
Bergero, F., Kofman, E., and Cellier, F. (2013). A Novel
Parallelization Technique for DEVS Simulation of
Continuous and Hybrid Systems. Simulation.
Braun, W., Gallardo-Yances, S., Link, K., and Bachmann,
B. (2012). Fast Simulation of Fluid Models with
Colored Jacobians. In Proceedings 9th International
Modelica Conference, pages 247–252, Munich, Ger-
many.
Cellier, F. (1979). Combined Continuous/Discrete System
Simulation by Use of Digital Computers: Techniques
and Tools. PhD thesis, Swiss Federal Institute of Tech-
nology.
Cellier, F. (1991). Continuous System Modeling. Springer–
Verlag, New York, NY, USA.
Cellier, F. and Elmqvist, H. (1993). Automated Formula
Manipulation Supports Object–oriented Continuous
System Modeling. IEEE Control Systems, 13(2):28–
38.
Cellier, F. and Kofman, E. (2006). Continuous System Sim-
ulation. Springer–Verlag, New York, NY, USA.
Cellier, F., Schooley, L., Zeigler, B., Doser, A., Far-
renkopf, G., Kim, J., Pan, Y., and Williams, B. (1992).
Watchdog Monitor Prevents Martian Oxygen Produc-
tion Plant from Shutting Itself Down During Storm.
In Proceedings International Symposium on Robotics
and Manufacturing, pages 697–704, Santa Fe, NM,
USA.
de Albornoz, A. (1996). Inductive Reasoning and Re-
construction Analysis: Two Complementary Tools for
Qualitative Fault Monitoring of Large–scale Systems.
PhD thesis, Polytechnical University of Barcelona.
Elmqvist, H. (1978). A Structured Model Language for
Large Continuous Systems. PhD thesis, Lund Institute
of Technology.
Elmqvist, H., Cellier, F., and Otter, M. (1993). Object–
oriented Modeling of Hybrid Systems. In Proceed-
ings ESS’93, European Simulation Symposium, pages
xxxi–xli, Delft, The Netherlands.
Elmqvist, H. and Otter, M. (1994). Methods for Tear-
ing Systems of Equations in Object–oriented Model-
ing. In Proceedings European Simulation Multicon-
ference, pages 326–332, Barcelona, Spain.
Escobet, A., Nebot, A., and Cellier, F. (1999). Model Ac-
ceptability Measure for the Identification of Failures
in Qualitative Fault Monitoring Systems. In Proceed-
ings European Simulation Multi-Conference, pages
339–347, Warsaw, Poland.
Grinblat, G., Ahumada, H., and Kofman, E. (2012). Quan-
tized State Simulation of Spiking Neural Networks.
Simulation, 88(3):299–313.
Hairer, E. and Wanner, G. (1999). Stiff differential equa-
tions solved by Radau methods. Journal of Computa-
tional and Applied Mathematics, 111(1-2):93–111.
Kofman, E. (2003). Quantization-based Simulation of Dif-
ferential Algebraic Equation Systems. Simulation,
79(7):363–376.
Link, K., Steuer, H., and Butterlin, A. (2009). Deficien-
cies of Modelica and its Simulation Environments for
Large Fluid Systems. In Proceedings 7th Interna-
tional Modelica Conference, pages 341–344, Como,
Italy.
Mattsson, S., Elmqvist, H., and Broenink, J. (1997). Mod-
elica – An International Effort to Design the Next
Generation Modeling Language. Journal A, Benelux
Quarterly Journal on Automatic Control, 38(3):16–
19.
Mattsson, S., Olsson, H., and Elmqvist, H. (2000). Dy-
namic Selection of States in Dymola. In Proceedings
Modelica Workshop, pages 61–67, Lund, Sweden.
Migoni, G., Bortolotto, M., Kofman, E., and Cellier, F.
(2013). Linearly Implicit Quantization-based Inte-
gration Methods for Stiff Ordinary Differential Equa-
tions. Simulation Modelling Practice and Theory.
Migoni, G., Kofman, E., and Cellier, F. (2012).
Quantization-based New Integration Methods for Stiff
ODEs. Simulation, 88(4):387–407.
Mitchell, E. and Gauthier, J. (1986). ACSL: Advanced Con-
tinuous Simulation Language – User Guide and Refer-
ence Manual. Mitchell & Gauthier Assoc., Concord,
MA, USA.
Modelica Association (2007). The Modelica Language
Specification Version 3.0. Technical report.
Modelica Association (2008). The Modelica Standard Li-
brary. Technical report.
Olsson, H., Otter, M., Mattsson, S., and Elmqvist, H.
(2008). Balanced Models in Modelica 3.0 for In-
creased Model Quality. In Proceedings 6th Interna-
tional Modelica Conference, pages 21–33, Bielefeld,
Germany.
Otter, M., Elmqvist, H., and Mattsson, S. (1999). Hybrid
Modeling in Modelica Based On Synchronous Data
Flow Principle. In Proceedings IEEE, International
Symposium on Computer Aided Control System De-
sign, pages 151–157, Kohala Coast, Hawaii.
Pantelides, C. (1988). The Consistent Initialization of
Differential–Algebraic Systems. SIAM Journal of Sci-
entific and Statistical Computing, 9(2):213–231.
Petzold, L. (1983). A Description of DASSL: A Dif-
ferential/Algebraic Equation Solver. In Stepleman,
R., editor, Scientific Computing, pages 65–68. North–
Holland, Amsterdam, The Netherlands.
Pop, A., Sjölund, M., Asghar, A., Fritzson, P., and Casella,
F. (2012). Static and Dynamic Debugging of Modelica
Models. In Proceedings 9th International Modelica
Conference, pages 443–454, Munich, Germany.
SIMULTECH2013-3rdInternationalConferenceonSimulationandModelingMethodologies,Technologiesand
Applications
IS-12
Ryan, R. (1990). ADAMS Multibody System Analysis
Software. In Schiehlen, W., editor, Multibody Systems
Handbook, pages 361–402. Springer-Verlag.
Sen, S., Moha, N., Baudry, B., and Jézéquel, J. (2009).
Meta–model Pruning. In Model Driven Engineer-
ing Languages and Systems, pages 32–46. Springer–
Verlag, Berlin and Heidelberg, Germany.
Sjölund, M., Braun, R., Fritzson, P., and Krus, P. (2010).
Towards Efficient Distributed Simulation in Model-
ica Using Transmission Line Modeling. In Proceed-
ings 3rd International Workshop on Equation-based
Object-oriented Languages and Tools, pages 71–80,
Oslo, Norway.
Tuinenga, P. (1995). Spice: A Guide to Circuit Simulation
and Analysis Using PSpice - Third Edition. Prentice
Hall, Saddle River, NJ, USA.
Zimmer, D. (2010). Equation–based Modeling of Variable–
structure Systems. PhD thesis, Swiss Federal Institute
of Technology.
BRIEF BIOGRAPHY
François E. Cellier received his
BS degree in electrical engineer-
ing in 1972, his MS degree in
automatic control in 1973, and
his PhD degree in technical sci-
ences in 1979, all from the Swiss
Federal Institute of Technology
(ETH) Zurich. Dr. Cellier
worked at the University of Arizona as professor of
Electrical and Computer Engineering from 1984 until
2005. He returned to his home country of Switzer-
land and his alma mater in the summer of 2005. Dr.
Cellier’s main scientific interests concern modeling
and simulation methodologies, and the design of ad-
vanced software systems for simulation, computer-
aided modeling, and computer-aided design. Dr. Cel-
lier has authored or co-authored more than 200 tech-
nical publications, and he has edited several books.
He published a textbook on Continuous System Mod-
eling in 1991 and a second textbook on Continu-
ous System Simulation in 2006, both with Springer-
Verlag, New York. He is a fellow of the Society for
Modeling and Simulation International (SCS).
TheComplexityCrisis
IS-13