2 MODELLING THE COMMUNICATION PROCESS
Let us consider communication between a
conversational agent A and its partner B (another
conversational agent or human user). The process is
defined if the following is given (Koit et al., 2009):
1) set G of communicative goals where both participants choose their own initial goals (G_A and G_B, respectively). In our case, G_A = "B makes a decision to do D"
2) set S of communicative strategies of the
participants. A communicative strategy is an
algorithm which a participant uses for achieving
his/her communicative goal. This algorithm
determines the activity of a participant at each
communicative step
3) set T of communicative tactics, i.e. methods of influencing the partner. For example, A can entice, persuade, or threaten B in order to achieve its goal G_A
4) set R of reasoning models which are used by participants when reasoning about an action D. A reasoning model is an algorithm the result of which is a positive or negative decision about the object of reasoning (in our case, an action D)
5) set P of participant models, i.e. a participant’s depiction of himself/herself and his/her partner: P = {P_A(A), P_A(B), P_B(A), P_B(B)}
6) set of world knowledge
7) set of linguistic knowledge.
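The seven components can be sketched as a data structure. Everything below is illustrative scaffolding, not part of the original model: the class and field names are invented, and the reasoning models, world knowledge, and linguistic knowledge are reduced to placeholders.

```python
# Illustrative sketch of components (1)-(7); names are invented, not from the paper.
from dataclasses import dataclass, field


@dataclass
class Participant:
    name: str
    goal: str                    # initial communicative goal (from set G)
    strategy: str                # communicative strategy (from set S)
    tactics: list[str] = field(default_factory=list)  # from set T


@dataclass
class CommunicationProcess:
    a: Participant
    b: Participant
    # set P: (holder, about) -> partner model, e.g. ("A", "B") -> P_A(B)
    partner_models: dict[tuple[str, str], dict] = field(default_factory=dict)
    world_knowledge: set = field(default_factory=set)        # component (6)
    linguistic_knowledge: set = field(default_factory=set)   # component (7)


A = Participant("A", goal="B makes a decision to do D", strategy="achieve G_A",
                tactics=["entice", "persuade", "threaten"])
B = Participant("B", goal="decide about D", strategy="weigh the aspects of D")
process = CommunicationProcess(A, B)
print(A.tactics)  # ['entice', 'persuade', 'threaten']
```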
2.1 Reasoning Model
The reasoning process of a subject who has to decide whether or not to perform an action D (in our case, B) consists of a sequence of steps where the resources and the positive and negative aspects of D are weighed. The partner (A) cannot take part in this reasoning process explicitly. (S)he can direct the reasoning of B only by giving information about certain aspects of D, by stressing the positive aspects of D and downplaying the negative ones. The positive aspects are the pleasantness and usefulness of doing D for B, but also the punishment for not doing D if D is obligatory. The negative aspects are the unpleasantness and harmfulness of doing D and the punishment for doing D if D is prohibited.
The reasoning model consists of two parts: 1) a model of the human motivational sphere; 2) reasoning schemes. We represent the model of the motivational sphere of a subject by the following vector of weights assigned by him/her to different aspects of an action:
w = (w(resources), w(pleasant), w(unpleasant),
w(useful), w(harmful), w(obligatory), w(prohibited),
w(punishment-for-doing-a-prohibited-action),
w(punishment-for-not-doing-an-obligatory-action)).
In this description, w(pleasant), etc. denotes the weight of the pleasant, etc. aspects of D. Such a vector (w_AB) is used by A as the partner model P_A(B). The weights of the aspects of D are A’s beliefs about B. During the interaction, A makes changes in the partner model if needed.
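The partner model can be sketched as such a weight vector. The aspect names follow the vector w defined above, but the numeric values and the update helper are invented for illustration:

```python
# Hypothetical encoding of the partner model P_A(B) as the weight vector w_AB;
# aspect names follow the paper, numeric values are invented.
w_AB = {
    "resources": 1,        # B has the resources to do D
    "pleasant": 4,
    "unpleasant": 6,
    "useful": 8,
    "harmful": 2,
    "obligatory": 0,       # 1 if D is obligatory, else 0
    "prohibited": 0,       # 1 if D is prohibited, else 0
    "punishment_do": 0,    # punishment for doing a prohibited action
    "punishment_not": 0,   # punishment for not doing an obligatory action
}


def update_belief(model: dict, aspect: str, new_weight: int) -> None:
    """A revises its belief about one aspect of D, e.g. after B's response."""
    model[aspect] = new_weight


update_belief(w_AB, "useful", 10)  # B turned out more receptive than assumed
print(w_AB["useful"])  # 10
```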
The second part of the reasoning model consists
of reasoning schemes that supposedly regulate
human action-oriented reasoning. A reasoning
scheme represents steps that the agent goes through
in its reasoning process; these consist in computing
and comparing the weights of different aspects of D;
and the result is the decision to do or not to do D (cf.
Koit and Õim, 2004). In the motivational sphere, three basic factors that regulate the reasoning of a subject concerning D are differentiated. First, the subject may wish to do D if the pleasant aspects of D for him/her outweigh the unpleasant ones; second, the subject may find it reasonable to do D if D is needed to reach some higher goal and the useful aspects of D outweigh the harmful ones; and third, the subject may be in a situation where (s)he must do D (is obliged to) because not doing D would lead to some kind of punishment. We call these the wish-, needed- and must-factors, respectively. They trigger the reasoning procedures wish, needed and must, respectively.
It is supposed here that the dimensions
pleasant/unpleasant, useful/harmful, etc. have
numerical values and that in the process of reasoning
(weighing the pro- and counter-arguments) these
values can be summed up.
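Under these assumptions, the three reasoning procedures can be sketched as simple comparisons of summed weights. This is a deliberately minimal scheme; the actual reasoning schemes in (Koit and Õim, 2004) are more elaborate:

```python
# Simplified sketch of the wish/needed/must procedures: a positive
# balance of the summed weights yields the decision "do D".

def wish(w):
    """Pleasant aspects of D outweigh the unpleasant ones."""
    return w["pleasant"] > w["unpleasant"]


def needed(w):
    """Useful aspects of D outweigh the harmful ones."""
    return w["useful"] > w["harmful"]


def must(w):
    """D is obligatory and the punishment outweighs the costs of doing D."""
    return (w["obligatory"]
            and w["punishment_not"] > w["unpleasant"] + w["harmful"])


def decide(w):
    """Positive decision if any triggered factor favours doing D."""
    return wish(w) or needed(w) or must(w)


w = {"pleasant": 4, "unpleasant": 6, "useful": 8, "harmful": 2,
     "obligatory": 0, "punishment_not": 0}
print(decide(w))  # True: the needed-factor (8 > 2) favours doing D
```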
In general this reasoning model follows the ideas
of the Belief-Desire-Intention model (Allen, 1994).
2.2 Reasoning in Interaction
In the goal base of one participant (the conversational agent A) a goal G_A gets activated. A checks the partner model, i.e. the supposed weights of the aspects of D. Then A chooses tactics for influencing B (e.g. to persuade B, i.e. to stress the usefulness of D). Therefore, the agent sets up a sub-goal: to trigger in B a certain reasoning process (in the case of persuading, by the needed-factor). A plans the dialogue acts and determines their verbal form as the first turn tr_1. This turn triggers a reasoning process in B where two types of procedures should be distinguished: the interpretation of A’s turn tr_1 and the generation of B’s response tr_2. The turn tr_2 triggers the reasoning cycle in A, and A builds a new turn tr_3. The dialogue comes to an end when A has reached
KEOD 2011 - International Conference on Knowledge Engineering and Ontology Development
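The turn exchange described in this subsection can be sketched as a loop. Everything here is illustrative: A's persuading tactics are reduced to repeatedly raising the assumed usefulness of D, and B's reasoning cycle is reduced to a single needed-factor check.

```python
# Illustrative sketch of the turn exchange tr_1, tr_2, tr_3, ...;
# the tactics and the reasoning cycle are heavily simplified.

def a_plans_turn(model):
    """A stresses the usefulness of D (persuading via the needed-factor)."""
    model["useful"] += 1  # A's argument raises the assumed usefulness
    return f"Doing D is useful (weight {model['useful']})."


def b_interprets_and_responds(model):
    """B's reasoning cycle: a positive decision ends the dialogue."""
    if model["useful"] > model["harmful"]:
        return "I agree to do D."
    return "I am not convinced."


def dialogue(model, max_turns=10):
    """Alternate turns until B agrees (A's goal G_A) or turns run out."""
    for i in range(1, max_turns + 1):
        tr = a_plans_turn(model)                   # tr_1, tr_3, ...
        reply = b_interprets_and_responds(model)   # tr_2, tr_4, ...
        if reply == "I agree to do D.":
            return i  # A has reached its goal after i exchanges
    return None       # goal not reached within the turn limit


print(dialogue({"useful": 2, "harmful": 5}))  # 4
```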