On the other hand, some environments focus on
the simulation part by opting for the “sandbox” ap-
proach. They let the user act freely as the simulation
evolves and reacts to their actions (Shawver, 1997).
In these environments, the only pedagogical control available is over the initial state of the world. Without any real-time pedagogical control, however, the effectiveness of the training is not guaranteed: the simulation can go in any direction, whereas we would want it to remain relevant to the trainee's profile and current state.
One approach for ensuring both user agency and
pedagogical control is to define a multilinear graph
of all possible scenarios. In (Delmas et al., 2007), the set of possible plots is thus explicitly modelled as a Petri net. However, as the complexity of the situations scales up, it becomes difficult to predict all possible courses of action. In particular, when the training targets difficult coactivity situations, the decision-making processes and emotions expressed by the virtual characters have to be believable, and are therefore often based on complex psychological models. In this case, it becomes impossible to foresee all possible combinations, and the virtual characters have to be given some autonomy so that scenarios can emerge from their actions.
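To make the graph-based approach concrete, the following minimal sketch (hypothetical, in Python; not the actual model of Delmas et al.) encodes a branching plot as a 1-safe Petri net: places mark the current narrative state, and a transition may fire once all of its input places are marked.

class PetriNet:
    """1-safe Petri net: the marking is a set of place names."""
    def __init__(self, marking):
        self.marking = set(marking)
        self.transitions = {}  # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (set(inputs), set(outputs))

    def enabled(self):
        return [t for t, (ins, _) in self.transitions.items()
                if ins <= self.marking]

    def fire(self, name):
        ins, outs = self.transitions[name]
        assert ins <= self.marking, name + " is not enabled"
        self.marking = (self.marking - ins) | outs

# Two alternative plot branches from the same initial situation
# (illustrative place and transition names).
plot = PetriNet({"briefing_done"})
plot.add_transition("machine_breaks", ["briefing_done"], ["incident"])
plot.add_transition("routine_shift", ["briefing_done"], ["normal_ops"])
plot.add_transition("coworker_panics", ["incident"], ["conflict"])

print(plot.enabled())       # ['machine_breaks', 'routine_shift']
plot.fire("machine_breaks")
print(plot.enabled())       # ['coworker_panics']

Every reachable marking corresponds to a plot state the author has foreseen, which is precisely what fails to scale: each new character behaviour multiplies the places and transitions that must be written by hand.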
Combining autonomous characters with global scenario control is, however, fundamentally problematic: the controlling entity cannot influence the behaviour of autonomous characters unless they expose specific “hooks”. Indeed, most environments that include complex, emotional characters provide only semi-autonomous characters, as in Scenario Adaptor (Niehaus and Riedl, 2009). These characters can be given orders, either at the behavioural level or at a higher, motivational level. The main weakness of this approach is that nothing guarantees that the global behaviour of the characters will remain coherent. Yet coherence, especially in training environments, is essential to maintain the user's understanding of what is going on, as shown by (Si et al., 2010).
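The two levels of “hooks” can be illustrated as follows; this is a hypothetical sketch of a semi-autonomous character interface, not the actual Scenario Adaptor API.

class SemiAutonomousCharacter:
    def __init__(self, name):
        self.name = name
        self.goals = ["do_assigned_task"]  # motivational state, by priority
        self.forced_action = None          # pending behavioural override

    def order_action(self, action):
        # Behavioural-level hook: the orchestrator dictates the next action.
        self.forced_action = action

    def order_goal(self, goal):
        # Motivational-level hook: reprioritise a goal and let the
        # character's own decision making choose the actions.
        self.goals.insert(0, goal)

    def decide(self):
        if self.forced_action is not None:
            action, self.forced_action = self.forced_action, None
            return action                  # may contradict shown motivations
        return "pursue:" + self.goals[0]   # autonomous decision

worker = SemiAutonomousCharacter("worker")
worker.order_goal("evacuate")
print(worker.decide())    # pursue:evacuate
worker.order_action("drop_tools")
print(worker.decide())    # drop_tools, whatever the character's motivations

The behavioural hook bypasses the character's decision making entirely, which is exactly where incoherence can creep in: the forced action may contradict the motivations the character has displayed so far.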
Few systems combine global control of the simulation with the possibility for new situations to emerge from the user's actions or the characters' autonomous behaviour, while still ensuring coherence. An attempt to unite these different aspects was made in Thespian (Si et al., 2009), by computing the characters' motivations at the start of the simulation so that events would unfold according to a human-authored plot. However, this system does not allow dynamic scenario adaptation: rather than a predefined plot, we would like one that changes in real time according to which learning situations are considered relevant given the user's activity.
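The idea behind such character fitting can be caricatured as an offline search for goal weights under which each character's utility-maximising choice reproduces the authored plot step. The sketch below uses illustrative goals, actions and payoffs (it is not Thespian's actual fitting procedure), but it conveys both the idea and why the plot must be fixed before the simulation starts.

GOALS = ["safety", "productivity"]
ACTIONS = {
    "evacuate":     {"safety": 1.0, "productivity": -0.5},
    "keep_working": {"safety": -0.5, "productivity": 1.0},
}

def best_action(weights):
    # Utility-maximising choice for a character with these goal weights.
    return max(ACTIONS, key=lambda a: sum(weights[g] * ACTIONS[a][g]
                                          for g in GOALS))

def fit_weights(wanted_action):
    # Coarse grid search: first weight vector that makes the authored
    # plot step the character's own best choice.
    for w in [i / 20 for i in range(21)]:
        weights = {"safety": w, "productivity": 1 - w}
        if best_action(weights) == wanted_action:
            return weights
    return None

print(fit_weights("evacuate"))   # {'safety': 0.5, 'productivity': 0.5}

Because the weights are fitted once, before the simulation runs, the plot they encode cannot follow the learner's activity afterwards; hence the need for adaptation during the simulation itself.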
3 PROPOSITION
3.1 Approach
As we aim to train learners for complex work situations with a notable human-factors component, we adopt a character-based approach, using autonomous cognitive characters so that such situations emerge from both their interactions and those of the learner.
We propose a scenario adaptation module called SELDON (ScEnario and Learning situations adaptation through Dynamic OrchestratioN) that aims to ensure pedagogical control over a complex simulation without restraining the emergence of new situations or disturbing the coherence of the objects' or characters' behaviours. Our model lets the user act freely and indirectly adjusts the unfolding of events. Scenario adaptation occurs not only at the start of the simulation but throughout its course: learning situations relevant to the learner's profile and activity traces are generated dynamically, and the scenario is then altered in real time to guide the learner towards them.
SELDON is composed of two modules: TAILOR and DIRECTOR. TAILOR produces learning situations and constraints over the global scenario based on the current state of the learner (Carpentier et al., 2013). This paper focuses on DIRECTOR, which is in charge of generating a scenario that respects these constraints. The global scenario adaptation process is described in Figure 1, here shown within the HUMANS platform (Carpentier et al., 2013).
Figure 1: System architecture.
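As a rough sketch of this division of labour (hypothetical interfaces; the actual TAILOR and DIRECTOR models are described in (Carpentier et al., 2013) and in the rest of this paper), the adaptation loop can be read as follows.

# Hypothetical sketch of the SELDON adaptation loop; the real TAILOR
# and DIRECTOR components are far richer than these stubs.

def tailor(learner_profile, activity_traces):
    # TAILOR: from the learner's current state, select a relevant
    # learning situation and constraints over the global scenario.
    if "hazard_ignored" in activity_traces:
        return {"target": "machine_incident", "deadline": 120}
    return {"target": "routine_cooperation", "deadline": 300}

def director(constraints, world_state):
    # DIRECTOR: generate a scenario, i.e. a sequence of indirect
    # adjustments steering the simulation towards the target
    # situation without overriding the characters' decisions.
    return [("bias_event", constraints["target"]),
            ("nudge_character", "coworker", constraints["target"])]

def seldon_step(learner_profile, activity_traces, world_state):
    constraints = tailor(learner_profile, activity_traces)
    for adjustment in director(constraints, world_state):
        print("apply", adjustment)   # handed to the simulation engine

seldon_step({"level": "novice"}, ["hazard_ignored"], {})

The point of the split is that DIRECTOR's output consists of indirect adjustments rather than direct orders, so the characters' behaviour remains coherent while the scenario is steered.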