Changing of Participants’ Attitudes in
Argument-based Negotiation
Mare Koit
Institute of Computer Science, University of Tartu, Liivi 2, Tartu, Estonia
Keywords: Negotiation, Argument, Dialogue Model, Reasoning, Attitude, Knowledge Representation.
Abstract: We model argument-based negotiation where the initiator is convincing the partner to do an action. The initiator uses a partner model which captures the hypothetical attitudes of the partner related to the action under consideration. The partner, when reasoning, operates with an actual model, i.e. his actual attitudes, which remain hidden from the initiator. Both models change during negotiation as influenced by the presented arguments. The choice of an argument by a negotiation participant depends, on the one hand, on the attitudes related to the action and, on the other hand, on the result of reasoning based on these attitudes. The paper studies how the participants change their attitudes during a dialogue. A human-human dialogue illustrates the results of the analysis of a small dialogue corpus. A limited version of the model has been implemented on the computer.
1 INTRODUCTION
Negotiation is a form of interaction in which a group of agents, with a desire to cooperate but with potentially conflicting interests, try to come to a mutually acceptable division of a scarce resource or resources (Rahwan et al., 2003). Negotiation
dialogues are aimed at reaching an agreement
between participants when there is a perceived
divergence of interest. However, the participants are
also cooperative, at least to the extent that they are
willing to enter into joint interaction to agree on a
division of the resource at issue (DeVault et al.,
2015).
Argumentation-based negotiation is the process
of decision-making through the exchange of
arguments (Lewis et al., 2017).
According to Scherer’s typology, attitudes are
relatively enduring, affectively coloured beliefs,
preferences, and predispositions towards objects or
persons (Scherer, 2000). Attitude is a psychological
tendency that is expressed by evaluating a particular
entity with some degree of favour or disfavour.
Attitudes refer to people’s evaluations of entities in
their world. Attitudes often serve a key mediational role in behaviour change, i.e. attitude change can mediate the impact of some influence treatment on behavioural compliance (Petty and Briñol, 2015).
Our aim is to develop a dialogue system (DS)
which interacts with the user in a natural language
following norms and rules of human
communication. For that reason, we study human-
human spoken dialogues. We have worked out a
formal model of negotiation dialogue (Koit and
Õim, 2014; Koit, 2018a) which includes a reasoning
model and communicative strategies applied by the
participants in order to achieve their communicative
goals. Two kinds of attitudes of communication participants have been introduced in the model: attitudes related to a communication partner and attitudes related to a negotiation object, in our case an action (Koit, 2018b).
A communicative space has been defined in order to model the attitudes related to communication partners. In the current paper, we
will concentrate on reasoning of communication
participants and accordingly, on their attitudes
related to a negotiation object. We analyse human-
human argumentation dialogues where the initiator
is convincing her partner to make a decision to do an
action, i.e. the negotiation object is an action. We
study how the participants can influence each other
when negotiating and how they change their
attitudes related to the action under consideration
using arguments. The only possibility is to analyse people's wording, not their inner states.
The remainder of the paper is structured as
follows. Section 2 introduces our dialogue model
which includes a reasoning model about doing an
action. The attitudes of a communication participant in relation to the action will be represented as coordinates of a vector modelling the motivational sphere of the reasoning subject. The attitudes change in dialogue as influenced by the arguments of the communication participants. So far, we have applied the reasoning model to simple artificial dialogues. In this paper, we aim to evaluate the model on actual human-human dialogues. For that, we analyse a dialogue corpus, which will be introduced in Section 3. Section 4 presents a case study: a dialogue example from the corpus which
demonstrates how the reasoning model describes
changing of the attitudes related to the negotiation
object. Section 5 discusses the reasoning model and
introduces the implemented DS. Section 6 draws
conclusions.
2 DIALOGUE MODEL
We are modelling negotiations between two
participants A and B in a natural language. One of
them (let it be A) initiates the dialogue by requesting
her partner B to agree to do an action D. If B refuses
then A in negotiation tries to influence him by
presenting various arguments for doing D. The
arguments are based on the partner model, i.e. the image A has of B's attitudes related to different aspects of the action D. The partner B, in his turn,
may present counterarguments based on his actual
attitudes. The counterarguments show which beliefs
of A about B’s attitudes were wrong and therefore,
how A has to change her partner model. The
dialogue finishes with B's decision: to do D or not. Depending on the decision, A either has or has not achieved her initial communicative goal.
2.1 Reasoning Model
After A has made a proposal or request to the partner
B to do the action D, B can respond with agreement
or rejection, depending on the result of his
reasoning. Rejection can (but need not) be supported with an argument against doing D. Such arguments give A information about the reasoning process that brought B to his decision. Therefore, a reasoning model should be included in the dialogue model.
There are various formal approaches to
reasoning, e.g. the Elaboration Likelihood Model
(Cacioppo et al., 1986; Petty et al., 2018), Social
Judgment Theory, Social Impact Theory, etc.
Nevertheless, we use a naïve, ‘folk’ theory in our
reasoning model (D’Andrade, 1987; Davies and
Stone, 1995; Õim, 1996). Our model is based on the
studies in the common-sense conception of how the
human mind works in such situations. The general
principles of the model are analogous to the BDI
(Belief-Desire-Intention) model (Grosz and Sidner,
1986; Allen 1995; Boella and van der Torre, 2003)
but it has some specific traits (cf. Koit and Õim,
2014).
First, along with desires we also consider other
kinds of motivational inputs for creating the
intention to do an action in a reasoning subject (e.g. whether the subject considers the action pleasant or useful to him/her, or whether s/he is forced to do it independently of his/her immediate wish).
Secondly, we suppose that people start, as a rule,
from this conception, not from any consciously
chosen scientific one. We want to model a naïve
‘theory’ that people themselves use when they are
interacting with other people and trying to change
their attitudes, to predict and influence their
decisions.
Our reasoning model consists of two parts: a model of the human motivational sphere, and reasoning procedures.
(1) We represent the model of the motivational sphere of a reasoning subject by the following vector of attitudes related to the reasoning object, the action D:

w_D = (w(resources_D), w(pleasant_D), w(unpleasant_D), w(useful_D), w(harmful_D), w(obligatory_D), w(punishment-not_D), w(prohibited_D), w(punishment-do_D)).
We suppose in our model that the attitudes have numerical values (weights). Here w(pleasant_D), etc. denote the weights of the pleasant, etc. aspects of D; w(punishment-not_D) is the weight of the punishment for not doing D if it is obligatory; w(punishment-do_D) is the weight of the punishment for doing D if it is prohibited. Further, w(resources_D) = 1 if the subject has all the resources necessary to do D (otherwise 0); w(obligatory_D)/w(prohibited_D) = 1 if D is obligatory/prohibited for the reasoning subject (otherwise 0). The values of the other weights are integers on a scale from 0 to 10.
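For illustration, the attitude vector can be written down as a simple data structure. The following is a minimal sketch in Python; the class name Attitudes and the field names are our own choices, not part of the model's notation:

from dataclasses import dataclass

@dataclass
class Attitudes:
    """Vector w_D of a subject's attitudes towards an action D."""
    resources: int       # 1 if all resources needed for doing D exist, else 0
    pleasant: int        # weight of the pleasant aspects of D (0..10)
    unpleasant: int      # weight of the unpleasant aspects of D (0..10)
    useful: int          # weight of the useful aspects of D (0..10)
    harmful: int         # weight of the harmful aspects of D (0..10)
    obligatory: int      # 1 if D is obligatory for the subject, else 0
    punishment_not: int  # weight of the punishment for not doing D if obligatory
    prohibited: int      # 1 if D is prohibited, else 0
    punishment_do: int   # weight of the punishment for doing D if prohibited

# Example: the initial partner model of the case study in Section 4
# would be Attitudes(1, 6, 2, 1, 1, 1, 0, 0, 0).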
Some comments are necessary here. Definitely,
people do not operate with numerical weights in
their reasoning. Instead, they rather use words of a
natural language to characterize the attitudes. For
example, the pleasantness of an action can be
evaluated by such words and expressions as
excellent, very pleasant, etc. Still, the words can
approximately be represented on a numerical scale.
Alternatively, fuzzy logic could be used. Further, the aspects of actions considered here are not fully independent. For example, harmful consequences of an action are, as a rule, unpleasant for a subject (though what is unpleasant is not always harmful). Accordingly, we do not assume independence of the aspects in the reasoning process.
(2) The second part of the reasoning model
consists of reasoning procedures that regulate, as we
suppose, human action-oriented reasoning.
According to our model, the reasoning process can be triggered by three main types of determinants: the wish-, needed- and must-determinants (Õim, 1996).
The process itself consists of a sequence of steps
where such aspects participate as resources of the
reasoning subject for doing D, positive aspects of D
or its consequences (pleasantness, usefulness, and
also punishment for not doing D if it is obligatory),
and negative aspects (unpleasantness, harmfulness,
and punishment for doing D if it is prohibited).
There are three reasoning procedures in our
model (WISH, NEEDED, and MUST) which
depend on the determinant that triggers the
reasoning. Each procedure represents the steps that a
subject goes through in the reasoning process when
comparing and summarizing weights of different
aspects of D, and the result is the decision: to do D
or not.
The reasoning procedures include some
principles which represent the interactions between
the determinants, e.g.
• people want pleasant states and do not want the unpleasant ones;
• if the sum of the values of the internal (wish- and needed-) determinants and the value of the external (must-) determinant appear equal in a situation, then the decision suggested by the internal determinants is preferred.
As an example, let us present the reasoning
procedure WISH as a step-form algorithm in Fig. 1
triggered by the wish of the reasoning subject to do
D, that is, D is not less pleasant than unpleasant for
the subject, cf. (Koit, 2016). Here we do not indicate
the action D which remains the same during the
reasoning.
If D is not less useful than harmful then the
reasoning procedure NEEDED can be triggered by
the reasoning subject. Finally, if D is obligatory then
the subject can trigger the reasoning procedure
MUST, cf. (Koit and Õim, 2014). When reasoning, a
subject applies the procedures in a certain order as
motivated by the internal or external determinants.
First of all, s/he tries to apply the procedure WISH.
If it is impossible (the presumption is not fulfilled) or it gives the decision "do not do D", then the
subject applies the procedure NEEDED and finally,
the procedure MUST, until the decision (do D or
not) is achieved (Koit and Õim, 2014).
Presumption: w(pleasant) ≥ w(unpleasant).
1) Is w(resources) = 1? If not then go to 11.
2) Is w(pleasant) > w(unpleasant) + w(harmful)? If not then go to 6.
3) Is w(prohibited) = 1? If not then go to 10.
4) Is w(pleasant) > w(unpleasant) + w(harmful) + w(punishment-do)? If yes then go to 10.
5) Is w(pleasant) + w(useful) > w(unpleasant) + w(harmful) + w(punishment-do)? If yes then go to 10 else go to 11.
6) Is w(pleasant) + w(useful) ≤ w(unpleasant) + w(harmful)? If not then go to 9.
7) Is w(obligatory) = 1? If not then go to 11.
8) Is w(pleasant) + w(useful) + w(punishment-not) > w(unpleasant) + w(harmful)? If yes then go to 10 else go to 11.
9) Is w(prohibited) = 1? If yes then go to 5 else go to 10.
10) Decide: do D. End.
11) Decide: do not do D. End.
Figure 1: The reasoning procedure WISH.
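The step-form algorithm of Figure 1 can be translated into code almost line by line. The following sketch in Python reuses the Attitudes structure above; the function name and the encoding of the decisions as True ("do D"), False ("do not do D") and None (the presumption fails, so WISH cannot be triggered) are our own conventions:

from typing import Optional

def wish(w: Attitudes) -> Optional[bool]:
    # Presumption: D is not less pleasant than unpleasant for the subject.
    if w.pleasant < w.unpleasant:
        return None
    if w.resources != 1:                                   # step 1
        return False                                       # step 11
    if w.pleasant > w.unpleasant + w.harmful:              # step 2
        if w.prohibited != 1:                              # step 3
            return True                                    # step 10
        if w.pleasant > w.unpleasant + w.harmful + w.punishment_do:  # step 4
            return True
        return (w.pleasant + w.useful >
                w.unpleasant + w.harmful + w.punishment_do)          # step 5
    if w.pleasant + w.useful <= w.unpleasant + w.harmful:  # step 6
        if w.obligatory != 1:                              # step 7
            return False
        return (w.pleasant + w.useful + w.punishment_not >
                w.unpleasant + w.harmful)                  # step 8
    if w.prohibited == 1:                                  # step 9
        return (w.pleasant + w.useful >
                w.unpleasant + w.harmful + w.punishment_do)          # step 5
    return True                                            # step 10

The procedures NEEDED and MUST, which are not reproduced in this paper, would follow the same pattern; a reasoning subject then tries wish() first and falls back to the other procedures in the order described above.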
We use two vectors of attitudes, w_B^D and w_AB^D, in our dialogue model. Here w_B^D is the model of the motivational sphere of B, who has to make a decision about doing D; the vector includes B's (actual) evaluations of D's aspects, and it is used by B when he is reasoning about doing D. The other vector, w_AB^D, is the partner model, which includes A's beliefs concerning B's attitudes (the hypothetical evaluations); it is used by A when she is planning her next turn in the dialogue. Both models w_AB^D and w_B^D change during negotiation as influenced by the arguments presented by the participants.
2.2 Communicative Strategies and
Tactics
A communicative strategy is an algorithm used by a
participant for achieving his/her communicative goal
(Koit and Õim, 2014; Koit, 2018a). The initiator A, having the communicative goal of convincing B to make a decision to do D, can realize her communicative strategy in different ways, e.g. she
can entice, persuade or threaten the partner B to do D. Respectively, she stresses the pleasantness or
usefulness of doing D or punishment for not doing D
if it is obligatory. We call these ways of realization
of a communicative strategy communicative tactics.
B similarly applies his communicative strategy
through related communicative tactics. Some
algorithms are presented in (Koit, 2018a).
The initiator A chooses a suitable communicative
strategy and the communicative tactics in order to
direct B’s reasoning to the desirable decision. When
trying to influence B to make the pursued decision
(do the action D) and to change his initial attitudes
(the model w_B^D), A uses the partner model w_AB^D. A stresses the positive and downgrades the negative
aspects of the action. Various arguments for
doing/not doing D will be presented by the
participants in a systematic way. While enticing
(respectively, persuading or threatening) the partner B to do D, A attempts to trigger the reasoning
procedure WISH (respectively, NEEDED or MUST)
in B’s mind (Koit and Õim, 2014).
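To illustrate how the tactics connect to the reasoning procedures, the following sketch checks, before an argument is presented, whether it would lead the targeted procedure to a positive decision in the partner model. The mapping of tactics to targeted aspects and the unit weight of an argument are assumptions drawn from Sections 2.1 and 4; the names TACTIC_ASPECT and plan_argument are ours:

from typing import Callable, Optional

# Hypothetical mapping: enticing targets the pleasantness (WISH),
# persuading the usefulness (NEEDED), and threatening the punishment
# for not doing D (MUST).
TACTIC_ASPECT = {
    "entice": "pleasant",
    "persuade": "useful",
    "threaten": "punishment_not",
}

def plan_argument(w_AB: Attitudes, tactic: str,
                  procedure: Callable[[Attitudes], Optional[bool]]) -> bool:
    # Tentatively apply one more unit-weight argument of the chosen tactic
    # to the partner model and check whether the targeted reasoning
    # procedure would then give the decision "do D".
    aspect = TACTIC_ASPECT[tactic]
    setattr(w_AB, aspect, getattr(w_AB, aspect) + 1)
    return procedure(w_AB) is True

For instance, A, while enticing, would check plan_argument(w_AB, "entice", wish) before presenting her next argument.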
3 EMPIRICAL MATERIAL
The current study is based on the Estonian dialogue
corpus (Hennoste et al., 2008). The main part of the
corpus is formed by transcripts of human-human
dialogues recorded in authentic situations. Among
them are phone calls (travel negotiations,
telemarketing calls, directory inquiries, etc.) as well
as face-to-face conversations, in total 1056
transliterated texts (206,485 tokens).
For this study, a small sub-corpus consisting of
five everyday phone calls between acquaintances
has been chosen from the corpus. In the dialogues,
participants are negotiating about doing an action by
one of them. We will consider how the participants
are reasoning in order to make their decisions about
the action and how they are influencing the partner
to change his/her attitudes related to the action.
However, direct access to their minds is impossible. Instead, we can draw conclusions only by analysing their utterances, i.e. the dialogue text.
In order to describe the reasoning processes of
the participants, we use the models w_AB^D and w_B^D,
and the reasoning procedures introduced in the
previous section. We are interested in how well the models describe authentic human-human dialogues and whether they can be used when developing a DS which interacts with the user like a human.
The initiator A, starting a dialogue, generates a partner model w_AB^D (using her preliminary knowledge) and determines the communicative tactics T_A which she will use (e.g. enticement), i.e. she accordingly fixes a reasoning procedure R_A which she aims to trigger in B's mind (e.g. WISH). A applies the reasoning procedure in her partner model in order to 'put herself' into B's role and to choose suitable arguments when convincing B to make a decision to do D.
B has his own model, the vector w_B^D, the exact values of whose coordinates A does not know (just as w_AB^D is not directly accessible to B). He in his turn determines a reasoning procedure R_B which he will use in order to make a decision about doing D (the procedure can differ from the R_A fixed by A), as well as his communicative tactics T_B.
4 CHANGING THE ATTITUDES:
A CASE STUDY
Let us consider an example from our analysed sub-
corpus in order to demonstrate how both models of
motivational sphere are used in a reasoning process
and how the attitudes of participants captured in the
models are changing.
The following dialogue is a transcript of a phone call from a mother (participant A) to her son (participant B). A makes a proposal to B to bake gingersnaps (the action D) and presents a number of arguments during the dialogue in order to produce/increase B's wish to do the action, until B finally agrees. The transcription conventions of Conversation Analysis are used in the example (Sidnell and Stivers, 2012). In the following, we do not indicate the action D in the vectors w_AB and w_B because it remains the same.
Let us suppose that mother A (knowing her son B) has created the following partner model: w_AB = (1,6,2,1,1,1,0,0,0), i.e. A believes that B has all the resources to bake gingersnaps (the value of the first coordinate equals 1); further, the action is much more pleasant (6) than unpleasant (2) for B; it is useful for B because gingersnaps will be prepared, and similarly harmful because it takes time (both values 1); it is obligatory (1) for B because a son is obliged to fulfil his mother's request, but no punishment (0) will follow if B does not agree; the action is not prohibited (0) and therefore no punishment (0) will follow when doing it. The coordinates of the vector w_AB should be empirically confirmed, based on A's preliminary knowledge about B. (Still, an external observer can hardly ever determine these values exactly by only analysing the dialogue. Similarly, as already said above, people do not operate with exact numerical values when reasoning.)
We further suppose that mother A will entice her son B, assuming that B wants to do D. This assumption is confirmed by the following dialogue analysis: all the arguments presented by A increase the pleasantness of the action. (Still, we carry out an informal analysis here; the automatic analysis of utterances, with the aim of determining which aspects of the action they influence, remains for future work.) The reasoning procedure WISH applied by A in the initial partner model gives the decision "do D" (cf. Fig. 1, steps 1, 2, 3, 10). Therefore, A makes a proposal, optimistically expecting B's agreement:
/---/
A: ´küsimus.
A question.
(0.6) .hhhhh kas sulle pakuks ´pinget ´piparkookide
´küpsetamine.
Do you like to bake gingersnaps?
(1.7)
B: ´praegu.
Just now?
(0.6)
A: jah.
Yes.
(0.6)
Let us further suppose that B's initial model is w_B = (1,1,5,2,1,1,1,0,0), i.e. the resources exist (value 1), and the action is much more unpleasant (5) than pleasant (1) but nonetheless more useful (2) than harmful (1) for him. Therefore, B's initial attitudes are quite different from A's guesses. (Again, we evaluate the coordinates/attitudes only approximately here, by an informal analysis of the dialogue.) Thus, B does not want to do D because its pleasantness is smaller than its unpleasantness (contrary to A's supposition). Based on w_B, B cannot trigger the reasoning procedure WISH in his mind because the presumption of the procedure is not fulfilled (cf. Fig. 1). However, he can trigger the procedure NEEDED (because D is more useful than harmful for him), which still gives the decision "do not do D", as demonstrated by B's next utterance:
B: .hhhhhhh ma=i=´tea vist ´mitte.
I don’t know, perhaps not.
As follows from B's refusal, A has to update the partner model. The updated model will be w_AB = (1, 6→2, 2, 1, 1, 1, 0, 0, 0), where x→y marks a coordinate updated from the value x to the value y, because it should give the decision "do not do D" by applying the reasoning procedure WISH (valid for A), like the decision B got (cf. Fig. 1, steps 1, 2, 6, 7, 8, 11). Here A supposes that B applies this same reasoning procedure (which actually is not the case), and she does not change her communicative tactics (enticement).
Now A presents an argument for increasing the
pleasantness:
A: ja=sis gla´suurimine=ja=´nii.
And then glazing and so on.
(0.6)
At the same time, she increases the value of the pleasantness in her partner model. We suppose (in our implementation) that every argument increases (respectively, decreases) the targeted value by one unit (by 1). Thus, we consider all the arguments to be equal, having the value/weight 1 (which is a simplification; in reality, the arguments could have different weights). The new partner model will be w_AB = (1, 2→3, 2, 1, 1, 1, 0, 0, 0). The reasoning procedure WISH gives the decision "do D" in this model, therefore A is again looking for B's agreement.
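Written as code, this unit update is a one-line helper (a sketch under the simplification just stated; the function name apply_argument is ours):

def apply_argument(w: Attitudes, aspect: str, delta: int = 1) -> None:
    # Shift one attitude weight by the (unit) weight of an argument.
    setattr(w, aspect, getattr(w, aspect) + delta)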
As influenced by A's argument, B in his turn increases the value of the pleasantness (by 1) in his model: w_B = (1, 1+1=2, 5, 2, 1, 1, 1, 0, 0). B continuously applies the reasoning procedure NEEDED, which again gives the decision "do not do D":
B: ´ei, ´ei, ´ei ei=´ei.
No, no, no, no, no.
(0.9)
Based on B's rejection, A has to update the partner model: w_AB = (1, 3→2, 2, 1, 1, 1, 0, 0, 0). Now the reasoning procedure WISH applied by A gives the decision "do not do D", like the decision B got. When enticing, A once more increases the pleasantness, presenting the following argument:
A: me saaksime nad ´vanaema=jurde ´kaasa võtta.
We can take them with us when going
to visit grandmother.
(0.4)
The new partner model is w_AB = (1, 2→3, 2, 1, 1, 1, 0, 0, 0); the reasoning procedure WISH gives the decision "do D".
Influenced by the presented argument, B changes his attitude about the pleasantness, after which w_B = (1, 2+1=3, 5, 2, 1, 1, 1, 0, 0). The reasoning procedure NEEDED, continuously applied by B, gives the decision "do not do D":
B: ´präägu ei=´taha.
I don't want to right now.
(1.3)
A once more has to decrease the pleasantness in her partner model, getting w_AB = (1, 3→2, 2, 1, 1, 1, 0, 0, 0), where the procedure WISH gives the result "do not do D".
B: aga (.) noh, kas sa mõtled nagu .hhh kui sa tuled
´koju=vä.
But, well, do you mean when you come home?
A: .hhh ei
No.
ma mõtlen: kui mind kodus ei=´ole.
I think, when I’m not home.
The argument presented by A (I'm not home) implies w_AB = (1, 2→3, 2, 1, 1, 1, 0, 0, 0), and the decision will be "do D". This argument increases the pleasantness of D also for B (he obviously likes to act alone, while his mother is not at home). After the update, w_B = (1, 3+1=4, 5, 2, 1, 1, 1, 0, 0), and the procedure NEEDED finally gives the result "do D"; B agrees:
B: aa.
Aha.
(0.5) .hhh et ´lähen ostan ´tainast=vä.
Then I'll go and buy dough, yes?
A: ja=niimodi=jah,
Yes, right.
(1.4)
Nevertheless, the pleasantness of D is still less than the unpleasantness in w_B, therefore even now B does not want to do D; he only takes it as needed (more useful than harmful). A presents her next argument for the pleasantness:
.hhh sinna:: ´Pereleiva ´kohvikusse võiksid minna @
´võiksid seal endale ühe ´kohvi lubada=ja @ (2.7) teha
ostmise ´mõnusaks=ja (0.8) ja=siis tulla ´koju=ja? (1.7)
´piparkooke teha=ja
And you could go to Pereleiva cafe
and take a coffee in order to make
buying pleasant for you, and then go
home to bake gingersnaps.
(1.2)
The reasoning procedure WISH gives "do D" as before in the updated partner model w_AB = (1, 3→4, 2, 1, 1, 1, 0, 0, 0). B similarly updates his model: w_B = (1, 4+1=5, 5, 2, 1, 1, 1, 0, 0). Now the pleasantness equals the unpleasantness, therefore B starts to want to do D. He can now apply the reasoning procedure WISH. The result will be "do D" (cf. Fig. 1, steps 1, 2, 6, 9, 10):
B: okei?
OK.
/---/
(Actually, both procedures NEEDED and WISH give the same positive decision in w_B. B prefers to apply the procedure WISH.) However, A does not finish the call; she presents an additional (the last) argument in order to increase B's wish once more:
A: .hhhhhhhhhh (0.2) ja ´siis ma tahtsin sulle öelda=et
´külmkapis on: ´sulatatud või tähendab=ned ´külmutatud
ja ´ülessulanud ´maasikad ja ´vaarika´mömm.=hh
And I wanted to tell you that there
are frozen strawberries and raspberries
in the icebox.
B: jah
Yes.
(0.3)
A: palun ´paku endale sealt.
Please help yourself.
/---/
After this argument, both models will change:

w_AB = (1, 4→5, 2, 1, 1, 1, 0, 0, 0)
w_B = (1, 5+1=6, 5, 2, 1, 1, 1, 0, 0).

In both models, the reasoning procedure WISH gives the final decision "do D".
Only one attitude (the pleasantness) changes in the models w_AB and w_B during the negotiation. That is because A over and over again presents arguments for the pleasantness of the action for B, i.e. she continuously applies the communicative tactics of enticing by trying to trigger the reasoning procedure WISH in B's mind. The partner B at the beginning of the negotiation does not want to do D; he takes it only as useful. When reasoning, B uses the procedure NEEDED. Nevertheless, A's arguments increase the pleasantness of D for B to such an extent that the wish to do D arises in his mind: the pleasantness finally becomes equal to the unpleasantness, which makes it possible to trigger the reasoning procedure WISH. The balance of the other weights in w_B contributes to achieving the result of the reasoning: "do D". This final result is the same in both models w_AB and w_B regardless of their differences both at the beginning and at the end of the negotiation.
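To make the trace above concrete, the following sketch replays the pleasantness updates with the helpers introduced in the previous sections. Since the procedure NEEDED is not reproduced in this paper, only the WISH decisions on the final models are checked; the numbers are taken from the example:

w_AB = Attitudes(1, 6, 2, 1, 1, 1, 0, 0, 0)  # A's initial partner model
w_B = Attitudes(1, 1, 5, 2, 1, 1, 1, 0, 0)   # B's actual initial model

# A's five arguments each raise the pleasantness of D for B by one unit.
for _ in range(5):
    apply_argument(w_B, "pleasant")

# A's own corrections after B's rejections leave her model at pleasantness 5.
w_AB.pleasant = 5

# In both final models, the procedure WISH yields the decision "do D".
assert wish(w_AB) is True
assert wish(w_B) is True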
5 DISCUSSION
The corpus analysis demonstrates that our dialogue model can be used for describing actual human-human dialogues. A big challenge when applying it to dialogue analysis has been the creation of the initial models w_AB and w_B. It is hard to determine, based only on a dialogue text, models that adequately describe the attitudes and attitude changes of the participants during a dialogue.
Another problem is recognition of the
communicative strategies and tactics applied by the
participants. This requires a linguistic analysis of
utterances in order to understand which aspect of the
action (i.e. the negotiation object) is affected by a
certain utterance (e.g. the pleasantness,
unpleasantness, etc.).
Still, our primary aim here is not the automatic analysis of dialogues in a natural language; rather, we want to design and develop a DS which follows the norms and rules of human communication.
When reasoning about doing an action, a subject weighs and compares different aspects of the action (the availability of resources, its pleasantness, usefulness, etc.), which are captured as attitudes in his/her model of the motivational sphere.
When attempting to direct B's reasoning to the desirable decision ("do D" in our case), A presents several arguments stressing the positive and downgrading the negative aspects of D. The choice of A's argument is based, on the one hand, on the partner model and, on the other hand, on the (counter)argument presented by the partner. Still, B is not
obliged to present a counterargument but he can
simply refuse to do the proposed action if his
reasoning gives a negative decision (like in the
considered example). When choosing the next
argument for D, A triggers a reasoning procedure in
her partner model depending on the chosen
communicative tactics, in order to be sure that the
reasoning will give a positive decision after
presenting this argument. B himself can use the same
or a different reasoning procedure triggering it in his
own model. After the updates made by both A and B in the two models during a dialogue (A's model of B is updated by A, and B's model of himself is updated by B), the models approach each other but, in general, do not become equal. Still, the results of reasoning in both models can coincide, as demonstrated by the example considered in the previous section. Therefore, A can convince B to do D even without having a complete picture of him.
Our dialogue model covers only a limited kind of dialogue; nevertheless, it illustrates the situation where the dialogue participants are able to change their attitudes related to the negotiation object (doing an action) and bring them closer to one another by using arguments. The initiator A does not need to know whether the counterarguments presented by the partner B are caused by B's opposite initial goal, or whether they are simply obstacles on the way to their common goal which can be eliminated by A's arguments. A's goal, on the contrary, is not hidden from B. Secondly, as said in Section 2.2, the different communicative tactics used by A are aimed at triggering different reasoning procedures in B's mind. A can fail to trigger the pursued reasoning procedure in B; nevertheless, she can achieve her communicative goal if she has a sufficient number of statements supporting her initial goal. In the considered example (Section 4), A finally succeeded in triggering the desirable reasoning procedure and achieved her communicative goal.
We have implemented the model of negotiation
as a simple DS where the computer plays A’s and
the user B’s role. The participants are interacting in
written Estonian. The computer uses ready-made
sentences for presenting arguments; the user can either choose from another set of ready-made sentences or type in free text containing specific keywords or key phrases. Based on the
implementation, we can study how attitudes of the
participants are changing in argumentation dialogue.
6 CONCLUSIONS
We consider dialogues where the participants A and B negotiate doing an action D by
B. Their initial communicative goals can coincide or be opposite. They present arguments for and
against doing D, in order to achieve their goals. The
arguments take into account the counterarguments
presented by the partner. In addition, A’s arguments
are based on her partner model whilst B’s arguments
are based on his model of himself. Both models include the attitudes related to the availability of resources and the positive and negative aspects of doing D, which have numerical values in our implementation. Both models change during negotiation. We study how the models are updated in a dialogue, and track the changes.
We have worked out a model of argument-based
negotiation which includes a reasoning model. When
reasoning about doing an action, the subject weighs, summarizes and compares different aspects of the action under consideration. If the positive aspects outweigh the negative ones, then the decision will be "do the action", otherwise "do not do it".
We have implemented the model of negotiation
as a simple DS. Our future work includes further development of the implementation by adding text processing tools to the DS in order to achieve more human-like interaction between the user and the system.
ACKNOWLEDGEMENTS
This work was supported by institutional research
funding IUT (20-56) of the Estonian Ministry of
Education and Research, and by the European Union
through the European Regional Development Fund
(Centre of Excellence in Estonian Studies). The
author is also very thankful to anonymous reviewers
for their valuable comments and suggestions.
REFERENCES
Allen, J., 1995. Natural Language Understanding. 2nd ed.
The Benjamin/Cummings Publ. Comp., Inc.
Boella, G., van der Torre, L., 2003. BDI and BOID
Argumentation. In Proc. of CMNA-03. The 3rd
Workshop on Computational Models of Natural
Argument at IJCAI-2003, Acapulco. Available online
at www.cmna.info
Cacioppo, J.T., Petty, R.E., Kao, C.F., Rodriguez, R., 1986. Central and peripheral routes to persuasion: An individual difference perspective. In Journal of Personality and Social Psychology, 51:1032–1043.
D'Andrade, R., 1987. A folk model of the mind. In Cultural Models in Language and Thought, D. Holland and N. Quinn (Eds.), 112–148. Cambridge University Press.
Davies, M., Stone, T., eds., 1995. Folk psychology: the
theory of mind debate. Oxford, Cambridge,
Massachusetts: Blackwell. ISBN 978-0-631-19515-3.
DeVault, D., Mell, J., Gratch, J., 2015. Toward natural
turn-taking in a virtual human negotiation agent. In
AAAI Spring Symposium on Turn-taking and
Coordination in Human-Machine Interaction, 9 p.
AAAI Press, Stanford, CA.
Grosz, B., Sidner, C.L., 1986. Attention, intentions, and the structure of discourse. In Computational Linguistics, 12(3), 175–204.
Hennoste, T., Gerassimenko, O., Kasterpalu, R., Koit, M., Rääbis, A., Strandson, K., 2008. From human communication to intelligent user interfaces: corpora of spoken Estonian. In Proc. of LREC, European Language Resources Association (ELRA). Marrakech, Morocco, 2025–2032. Available online at www.lrec-conf.org/proceedings/lrec2008
Koit, M., 2018a. Reasoning and communicative strategies in a model of argument-based negotiation. In Journal of Information and Telecommunication (TJIT), vol. 2, 14 p. Taylor & Francis Online. https://doi.org/10.1080/24751839.2018.1448504
Koit, M., 2018b. Modelling attitudes of dialogue participants: reasoning and communicative space. In A.P. Rocha and J. van den Herik, eds. Proc. of the International Conference on Agents and Artificial Intelligence ICAART'18, vol. 2, 581–588. Portugal, SciTePress.
Koit, M., 2016. Influencing the beliefs of a dialogue partner. In Christo Dichev, Gennady Agre, eds. Artificial Intelligence: Methodology, Systems, and Applications. Lecture Notes in Computer Science: 17th International Conference, AIMSA. Varna, Bulgaria. Springer International Publishing, 216–225. (LNAI 9883). https://doi.org/10.1007/978-3-319-44748-3
Koit, M., Õim, H., 2014. A computational model of argumentation in agreement negotiation processes. In Argument & Computation, 5, 209–236. https://doi.org/10.1080/19462166.2014.915233
Lewis, M., Yarats, D., Dauphin, Y.N., Parikh, D., Batra, D., 2017. Deal or no deal? End-to-end learning for negotiation dialogues. In Proc. of the Conference on Empirical Methods in Natural Language Processing, 2443–2453. Copenhagen, Denmark, Assoc. for Computational Linguistics.
Õim, H., 1996. Naïve Theories and Communicative Competence: Reasoning in Communication. In Estonian in the Changing World, 211–231. University of Tartu Press.
Petty, R.E., Briñol, P., 2015. Processes of social influence through attitude change. In E. Borgida and J. Bargh, eds. APA Handbook of Personality and Social Psychology (Vol. 1): Attitudes and social cognition, 509–545. Washington, D.C.: APA Books.
Petty, R.E., Briñol, P., Teeny, J., Horcajo, J., 2018. The elaboration likelihood model: Changing attitudes toward exercising and beyond. In B. Jackson, J. Dimmock, and J. Compton, eds. Persuasion and communication in sport, exercise, and physical activity, 22–37. Abington, UK: Routledge.
Rahwan, I., McBurney, P., Sonenberg, L., 2003. Towards a theory of negotiation strategy (a preliminary report). In S. Parsons and P. Gmytrasiewicz, eds. Game-Theoretic and Decision-Theoretic Agents (GTDT). Proc. of an AAMAS-2003 Workshop, 73–80. Melbourne, Australia.
Scherer, K.R., 2000. Psychological models of emotion. In J. C. Borod, ed. The neuropsychology of emotion, 137–162. Oxford.
Sidnell, J., Stivers, T., eds. 2012. Handbook of
Conversation Analysis. Boston: Wiley-Blackwell.
ISBN 978-1-4443-3208-7