the obligation of paying into an intention of paying (motivational layer); the third rule checks the affordance related to the intent (intentional layer) and, if this evaluation is positive, performs the paying action (action layer). The action is then externalized to a communication module of the agent, interacting with the world/environment, which in turn generates the actual consequence (signal layer).
To conclude, we observe that in this description the belief of having paid may be only partially aligned with the ontological reality. From the perspective of the agent, if the action has not failed, it is natural to assume that it has been successful. In reality, however, something may block the correct transmission of the act to its beneficiary (e.g. a failure in the bank databases). The extent of such alignment depends on the scope of the feedback process checking the performance.
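Continuing the sketch above (again under hypothetical predicate names), the optimistic belief update and the environment-driven feedback can be kept explicitly apart by refining the plan for the paying intention:

    // Optimistic update: if do_pay does not fail, the agent
    // believes it has paid, whether or not the act actually
    // reached its beneficiary.
    +!pay : affordance(pay)
       <- do_pay;
          +believed_paid.

    // Feedback: alignment with reality depends on a perception
    // generated by the environment (signal layer), e.g. a
    // confirmation issued by the bank.
    +confirmed(payment)
       <- +actually_paid.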
5 DISCUSSION
The modeling exercise running through the paper served as an example of the operational application of a knowledge acquisition methodology targeting socio-institutional scenarios. Each representation we considered (MSC, topology, Petri net, AgentSpeak(L) code) has shown its weaknesses and strengths in this respect. Furthermore, the cross-relations between them are not simple isomorphisms. Despite these difficulties, we think that using alternative visualizations is a way to achieve a more efficient elicitation (targeting also non-IT experts). Along this line of thought, we plan to implement and assess an integrated environment for knowledge acquisition; the scalability of the methodology should be supported by the introduction of an adequate subsumption relation between stories, allowing faster elicitation of models.
From a higher-level perspective, the present work connects scenario-based (or case-based) modeling with multi-agent systems technologies. The underlying idea is that, in order to acquire representations of social behaviours, we need cases to be valid models, and we can validate them by executing them.
(Mueller, 2003) observes that, although several story understanding programs, starting from BORIS (Charniak, 1972), have used a sort of multi-agent system for their internal representation, this choice is not easy for the programmer: such agents are difficult to write, maintain, and extend, because of the many potential interactions. His experience matches ours. However, we think that the connection of agent-based modeling with MAS is too strong and important to be easily discarded. As a longer-term objective, we aim to couple, on the same simulation framework, designed systems (e.g. IT infrastructures) with representations of known social behaviours.
Scenario-based Modeling. MSCs (and collections of them, e.g. HMSCs) were standardized as a support for the specification of telecommunication software, in order to capture system requirements and to collect them in meaningful wholes (Harel and Thiagarajan, 2004). Later on, other extensions, like LSCs (Damm and Harel, 2001) and CTPs (Roychoudhury and Thiagarajan, 2003), were introduced to support the automatic creation of executable specifications. The basic idea consists in collecting multiple inter-object interactions and synthesizing them into intra-object implementations. In principle, we share part of their approach: our work promotes the idea of using MSCs, albeit integrated with intentional concepts. However, in their case the target is a specific closed system (to be implemented), while in our case a scenario describes an existing behavioural component of an open social system. At this stage, we are content with transforming the MSC of a single case into the corresponding agent-role descriptions. The superposition of scenarios, in order to merge them into the same agent-role, remains an open research question.
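To illustrate this single-case transformation (a sketch under our own naming assumptions, using Jason-style communication primitives; this is not the synthesis machinery of LSCs or CTPs), an MSC arrow carrying a payment message from a payer to a payee lifeline projects onto the two agent-role descriptions as follows:

    // Payer role: sending the message is the projection of the
    // MSC arrow onto the sender's lifeline.
    +!pay(Payee, Amount)
       <- .send(Payee, tell, paid(Amount)).

    // Payee role: reacting to the message is the projection of
    // the same arrow onto the receiver's lifeline; the
    // acknowledge goal is a hypothetical continuation.
    +paid(Amount)[source(Payer)]
       <- !acknowledge(Payer, Amount).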
Story Understanding. AI started investigating stories in the ’70s, with the works of (Charniak, 1972) and (Abelson and Schank, 1977), which introduced concepts like scripts, conceptual dependency (Lytinen, 1992), and plot units (Lehnert, 1981). The interest in this subject diminished in the early ’80s, dispersing into other domains. (Mateas and Sengers, 1999) and others attempted a refocus at the end of the ’90s, introducing the idea of Narrative Intelligence, but again the mainstream of AI research moved elsewhere, apart from the works of Mueller (e.g. (Mueller, 2003)). All these authors, however, are mostly interested in story understanding. We are instead investigating the steps of construction of what they called a script (Abelson and Schank, 1977). In our perspective, common sense is not constructed once, as script-like knowledge, but emerges as a repeated pattern from several representations. Furthermore, we explicitly aim to take into account the integration of fault and non-compliant behaviours, increasing the “depth of field” of the representation.
Computational Implementation. Reproducing a system of interacting subsystems requires concurrency. Models of concurrent computation, like the Actor model (Hewitt et al., 1973), are implemented today in many development platforms. In our story-world, this solution would be perfect for objects. We would instead need to add intentional and institutional elements in