Evaluation of Dishonest Argumentation based on an Opponent Model:
A Preliminary Report
Kazuyuki Kokusho and Kazuko Takahashi
School of Science&Technology, Kwansei Gakuin University, 2-1, Gakuen, Sanda, 669-1337, Japan
Keywords:
Argumentation, Strategy, Persuasion, Dishonesty, Opponent Model.
Abstract:
This paper discusses persuasive dialogue in a case where dishonesty is permitted. We have previously proposed a dialogue model based on a predicted opponent model using an abstract argumentation framework, and discussed the conditions under which a dishonest argument could be accepted without being detected. However, it is hard to estimate the outcome of a dialogue, or to identify the causality between agents' knowledge and the result. In this paper, we implement our dialogue model and execute argumentations between agents under different conditions. We analyze and discuss the results of these experiments. In brief, our results show that the use of dishonest arguments affects the likelihood of successfully persuading the opponent, or winning a debate game, but we could not identify a relationship between the results of a dialogue and the initial argumentation frameworks of the agents.
1 INTRODUCTION
The aim of persuasion is to change an opponent’s
mind. An argumentation framework is a useful mech-
anism to manage a persuasive dialogue as a computa-
tional model, and there have been many studies on ar-
gumentation frameworks (Amgoud and de Saint-Cyr,
2013; Bench-Capon, 2003; Black and Hunter, 2015;
Prakken, 2006; Rahwan and Simari, 2009). In the
persuasion dialogue model, agents generally have in-
dependent knowledge or beliefs, which change every
time they get their opponent’s argument. Therefore,
when there are multiple possible counter-arguments
for the same argument, persuasion sometimes suc-
ceeds, but sometimes fails, depending on which ar-
gument is selected. Strategic selection of an ar-
gument can be done considering what an opponent
knows (Hunter, 2015; Rienstra et al., 2013).
Agents sometimes try to persuade their opponents by presenting dishonest arguments and, moreover, such dishonesty may be revealed. In this case, agents need a prediction of their opponent's knowledge or belief. Therefore, if we formalize dishonest arguments and suspicion about the truth of an argument, it is essential to build a dialogue model on an opponent model. However, the possibility that an agent could present a dishonest argument has not been included in most of the strategic argumentation models proposed so far. Takahashi et al. formalized dishonest argumentation using an opponent model (Takahashi and Yokohama, 2017). Of the several types of dishonesty, they focused on deception, that is, an agent intentionally hiding something she knows.
Consider the following situation, in which students are selecting a research laboratory (this is another version of the example shown in (Takahashi and Yokohama, 2017)). This example shows how opponent models are used in giving a dishonest argument and in pointing out the dishonesty.
[Labo Selection Example]
Alice tries to persuade Bob to apply to the
same laboratory. Both know that Professor
Charlie is strict and not generous. Alice, who
prefers strict professors, wants to apply to
Charlie’s laboratory. However, Bob wants to
work for a generous professor, but not for a
strict professor. Alice knows Bob's preference. In addition, Alice thinks that the only thing Bob knows about Charlie's reputation is that Charlie is strict.
Alice considers that if she said “Let's apply to Charlie's laboratory, because he is strict,” then Bob might reject her proposal. Therefore, to persuade Bob, she says “Let's apply to Charlie's laboratory, because he is generous,” hiding the fact that Charlie is not generous. However, Bob, who knows Charlie's reputation, suspects its truth, and may say, “No, I don't want to, because Charlie is not generous. Don't try
to persuade me by hiding that fact.” In this example, Alice deceives Bob by predicting his response. But as Bob knows what Alice knows, he suspects her of dishonesty and challenges her deception. A reason for the failure of the persuasion is that Alice does not know that Bob knows Charlie is not generous.
The success of persuasion depends on what an
agent knows and what strategy is taken. Parsons et
al. investigated the relationships between agents’ ini-
tial knowledge and the outcome of the dialogue for
several agents' tactics (Parsons et al., 2003). Yokohama et al. developed a strategy using an opponent model for honest argumentation that never fails to persuade, and proved its correctness (Yokohama and Takahashi, 2016). It is interesting to consider whether such a strategy exists in the case of a dishonest dialogue, and it is particularly interesting to investigate the effect of deception, and of being silent, on the outcome of a dialogue.
There have been many studies on computational argumentation, but few of them have evaluated agents' strategies; in those that have, a probabilistic approach is applied to learn an opponent model, and metrics for evaluating a dialogue are proposed, such as the length of a dialogue and the number of arguments that the agents have agreed on (Hadjinikolis et al., 2013; Thimm, 2014; Rahwan et al., 2009). Moreover, dishonesty has not been considered in these studies.
In this paper, we implement a dialogue based on
the model proposed in (Takahashi and Yokohama,
2017), where the agent predicts their opponent’s argu-
mentation framework. We define the concepts of dis-
honest argument and suspicious argument, by means
of the acceptance of arguments in this model. We ex-
ecute argumentations under different conditions and
show the experimental results and their analysis. The
main purpose of the experiment is to investigate the
effect of deception or being silent, and to identify
the relationship between these strategies and partic-
ular protocols or argumentation frameworks.
The results show that the use of dishonest argu-
ments affects the likelihood of successfully persuad-
ing the opponent, or winning a debate game. But we
could not identify a relationship between the results
of a dialogue and the argumentation framework of the
agents.
The rest of the paper is organized as follows.
Section 2 describes the argumentation framework on
which our model is based. In Section 3 we formal-
ize our dialogue protocol and the concepts related to
dishonesty. In Section 4, we present and evaluate the
results of our simulations. In Section 5, we compare
our approach to other approaches. Finally, in Sec-
tion 6 we present our conclusions.
2 ARGUMENTATION
FRAMEWORK
Dung’s abstract argumentation framework is defined
as the pair of a set and a binary relationship on the set
(Dung, 1995).
Definition 2.1 (argumentation framework). An argumentation framework is defined as a pair $\langle AR, AT \rangle$, where $AR$ is the set of arguments and $AT$ is a binary relationship on $AR$, called an attack. If $(A, A') \in AT$, we say that $A$ attacks $A'$.
Definition 2.2 (sub-AF). Let $AF_1 = \langle AR_1, AT_1 \rangle$ and $AF_2 = \langle AR_2, AT_2 \rangle$ be argumentation frameworks. If $AR_1 \subseteq AR_2$ and $AT_1 = AT_2 \cap (AR_1 \times AR_1)$, then it is said that $AF_1$ is a sub-argumentation framework (sub-AF, in short) of $AF_2$, denoted by $AF_1 \sqsubseteq AF_2$.
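To make these notions concrete, the following minimal Python sketch (our own illustration, not code from the paper; the names `AF` and `is_sub_af` are ours) represents an argumentation framework as a pair of sets and checks the sub-AF condition of Definition 2.2 on the framework that appears later in Figure 1.

```python
# A minimal sketch of Definitions 2.1 and 2.2 (illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class AF:
    arguments: frozenset          # AR: set of argument identifiers
    attacks: frozenset            # AT: set of pairs (attacker, attacked)

def is_sub_af(af1: AF, af2: AF) -> bool:
    """AF1 is a sub-AF of AF2 iff AR1 <= AR2 and AT1 = AT2 restricted to AR1 x AR1."""
    restricted = {(a, b) for (a, b) in af2.attacks
                  if a in af1.arguments and b in af1.arguments}
    return af1.arguments <= af2.arguments and set(af1.attacks) == restricted

# Example: the framework of Figure 1 and one of its sub-AFs.
af2 = AF(frozenset("ABCDE"),
         frozenset({("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "D")}))
af1 = AF(frozenset("ABC"), frozenset({("A", "B"), ("B", "C")}))
assert is_sub_af(af1, af2)
```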
We define the semantics of a given argumentation
framework based on labelling (Baroni et al., 2011).
Definition 2.3 (labelling). Let $AF = \langle AR, AT \rangle$ be an argumentation framework. A labelling is a total function $\mathcal{L}_{AF}: AR \to \{\mathtt{in}, \mathtt{out}, \mathtt{undec}\}$.
The idea underlying the labelling is to give each
argument a label. Specifically, the label in means that
the argument is accepted in the argumentation frame-
work, the label out means that the argument is re-
jected, and the label undec means that the argument
is neither accepted nor rejected.
Definition 2.4 (complete labelling). Let $AF = \langle AR, AT \rangle$ be an argumentation framework and $\mathcal{L}_{AF}$ be its labelling. If the following conditions hold for each $A \in AR$, then $\mathcal{L}_{AF}$ is a complete labelling of $AF$.
1. $\mathcal{L}_{AF}(A) = \mathtt{in}$ iff $\forall A' \in AR\ ((A', A) \in AT \rightarrow \mathcal{L}_{AF}(A') = \mathtt{out})$.
2. $\mathcal{L}_{AF}(A) = \mathtt{out}$ iff $\exists A' \in AR\ ((A', A) \in AT \wedge \mathcal{L}_{AF}(A') = \mathtt{in})$.
3. $\mathcal{L}_{AF}(A) = \mathtt{undec}$ iff $\mathcal{L}_{AF}(A) \neq \mathtt{in} \wedge \mathcal{L}_{AF}(A) \neq \mathtt{out}$.
Note that if an argument $A$ is attacked by no arguments, then $\mathcal{L}_{AF}(A) = \mathtt{in}$.
There are various semantics based on labelling,
but here, we use the term “labelling” to mean
grounded labelling. Every argumentation framework
has a unique grounded labelling.
Definition 2.5 (grounded labelling). Let $AF$ be an argumentation framework. The grounded labelling of $AF$ is a complete labelling $\mathcal{L}_{AF}$ where the set of arguments that are labelled $\mathtt{in}$ is minimal with respect to set inclusion.
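As an illustration of Definitions 2.3–2.5, the following Python sketch (ours, not the authors' implementation) computes the grounded labelling by the standard iterative procedure: repeatedly label in every argument whose attackers are all out, label out every argument with an in attacker, and leave the remaining arguments undec.

```python
# A sketch of grounded labelling computation (illustrative; not the paper's code).
def grounded_labelling(arguments, attacks):
    """Return a dict mapping each argument to 'in', 'out' or 'undec'."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    label = {}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in label:
                continue
            if all(label.get(b) == "out" for b in attackers[a]):
                label[a] = "in"        # every attacker is already out (or none exist)
                changed = True
            elif any(label.get(b) == "in" for b in attackers[a]):
                label[a] = "out"       # some attacker is already in
                changed = True
    # arguments never forced to 'in' or 'out' stay undecided
    return {a: label.get(a, "undec") for a in arguments}

# A small example with a mutual attack, which yields 'undec' labels:
args = {"A", "B", "C"}
atts = {("B", "C"), ("C", "B")}
print(grounded_labelling(args, atts))   # e.g. {'A': 'in', 'B': 'undec', 'C': 'undec'}
```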
For example, Figure 1 shows an argumentation framework $\langle \{A, B, C, D, E\}, \{(A, B), (B, C), (C, D), (D, E), (E, D)\} \rangle$ with its grounded labelling.
Figure 1: Labelled argumentation framework.
3 ARGUMENTATIVE DIALOGUE
MODEL
We describe the argumentative dialogue model pre-
sented in (Takahashi and Yokohama, 2017).
3.1 Argumentation Frameworks
An argumentative dialogue is a sequence of argu-
ments provided by agents following the protocol.
Each agent has her own argumentation framework, as
well as her prediction of the opponent’s argumenta-
tion framework, and makes a move in a dialogue us-
ing them. When an argument is given, then these ar-
gumentation frameworks are updated.
Consider a dialogue between agents $X$ and $Y$. We assume, as usual, a universal argumentation framework (UAF) $\mathcal{UAF}$ that contains every argument that can be constructed from all the available information in the universe. We naturally assume that $\mathcal{UAF}$ does not contain an argument that attacks itself. Let $AF_X$ and $AF_Y$ be the argumentation frameworks of $X$ and $Y$, respectively, where $AF_X, AF_Y \sqsubseteq \mathcal{UAF}$; let $PAF_Y$ and $PAF_X$ be $X$'s prediction of $Y$'s argumentation framework and $Y$'s prediction of $X$'s argumentation framework, respectively. That is, $X$ has two argumentation frameworks, $AF_X$ and $PAF_Y$, and $Y$ has $AF_Y$ and $PAF_X$. We assume several inclusion relationships among these argumentation frameworks. First, we assume $PAF_X \sqsubseteq AF_X$ and $PAF_Y \sqsubseteq AF_Y$, because common sense or widely prevalent facts are known to all agents, while there may be some facts that only the opponent knows and other facts that the agent is not sure whether the opponent knows. Second, we assume that $PAF_Y \sqsubseteq AF_X$ and $PAF_X \sqsubseteq AF_Y$, because a prediction is made using an agent's own knowledge.
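The inclusion assumptions of this subsection can be stated directly in code. The following sketch (our own toy illustration; the frameworks shown are hypothetical, not those of the paper's example) sets up the four frameworks held by two agents and asserts the assumed sub-AF relationships.

```python
# Sketch of the framework setup for agents X and Y (illustrative; names are ours).
def is_sub_af(af1, af2):
    ar1, at1 = af1
    ar2, at2 = af2
    return ar1 <= ar2 and at1 == {(a, b) for (a, b) in at2 if a in ar1 and b in ar1}

# A toy universal AF (UAF) and the four frameworks of Section 3.1.
uaf   = ({"A0", "A1", "A2"}, {("A1", "A0"), ("A2", "A1")})
af_x  = uaf                                            # X happens to know everything
af_y  = ({"A0", "A1"}, {("A1", "A0")})                 # Y knows a fragment
paf_y = ({"A0", "A1"}, {("A1", "A0")})                 # X's prediction of Y
paf_x = ({"A0"}, set())                                # Y's prediction of X

# Predictions are sub-AFs of both the predicted agent's framework
# and the predicting agent's own framework.
assert is_sub_af(paf_x, af_x) and is_sub_af(paf_y, af_y)
assert is_sub_af(paf_y, af_x) and is_sub_af(paf_x, af_y)
```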
3.2 A Dialogue Protocol
We introduce three types of acts in a persuasion di-
alogue to focus on clarifying the effect of deception,
although other acts can be considered.
Definition 3.1 (act). An act is assert, suspect, or
excuse.
Definition 3.2 (move). A move is a triple (X,R, T ),
where X is an agent, R is an argument, and T is an
act.
Definition 3.3 (dialogue). A dialogue $d_k$ $(k \geq 0)$ between a persuader $P$ and her opponent $C$ on a subject argument $A_0$ is a finite sequence of moves $[m_0, \ldots, m_{k-1}]$ where each $m_i$ $(0 \leq i \leq k-1)$ is in the form of $(X_i, R_i, T_i)$ and the following conditions are satisfied:
$d_0 = [\,]$;
and if $k > 0$,
(i) $X_0 = P$, $R_0 = A_0$ and $T_0 = \mathtt{assert}$.
(ii) For each $i$ $(0 \leq i \leq k-1)$, $X_i = P$ if $i$ is even, and $X_i = C$ if $i$ is odd.
(iii) For each $i$ $(0 \leq i \leq k-1)$, $m_i$ is one of the allowed moves. An allowed move is a move that obeys a dialogue protocol, as defined in Definition 3.4.
For a dialogue $d_k = [m_0, \ldots, m_{k-1}]$, the argumentation framework of agent $X$ for $d_k$ is denoted by $AF_X^{d_k}$; agent $X$'s prediction of $Y$'s argumentation framework for $d_k$ is denoted by $PAF_Y^{d_k}$. They are defined constructively. $AF_X^{d_0}$ and $PAF_Y^{d_0}$ are $X$'s argumentation framework and her prediction of $Y$'s argumentation framework given at an initial state, where $A_0 \in AF_{X_0}^{d_0}$.
A dialogue protocol is a set of rules for each act. An agent can give an argument contained in her argumentation framework at that instant. The preconditions of each act of agent $X$ for $d_k$ are formalized as follows. Hereafter, the symbol $-$ in a move stands for an anonymous element.
Definition 3.4 (allowed move). Let $X, Y$ be agents, and $d_k = [m_0, \ldots, m_{k-1}]$ be a dialogue. Let $AF_X^{d_k} = \langle AR_X^{d_k}, AT_X^{d_k} \rangle$ and $PAF_Y^{d_k} = \langle PAR_Y^{d_k}, PAT_Y^{d_k} \rangle$ be $X$'s argumentation framework and $X$'s prediction of $Y$'s argumentation framework for $d_k$, respectively. If a move $m_k$ satisfies the precondition, then $m_k$ is said to be an allowed move for $d_k$.
When $k = 0$, $(X, A_0, \mathtt{assert})$ is an allowed move, where $A_0$ is a subject argument.
When $k > 0$, the precondition of each move is defined as follows.
$(X, A, \mathtt{assert})$:
$m_k \neq m_i$ for $\forall i\ (0 \leq i < k)$, and
$m_{k-1} \neq (Y, -, \mathtt{suspect})$, and
$\exists j\ (0 \leq j < k)$; $m_j = (Y, A', -)$ and $(A, A') \in AT_X^{d_k}$.
$(X, A, \mathtt{suspect})$:
$m_{k-1} \neq (Y, -, \mathtt{suspect})$, and
$\exists j\ (0 \leq j < k)$; $m_j = (Y, A', -)$ and $(A, A') \in PAT_Y^{d_k} \wedge \mathcal{L}_{PAF_Y^{d_k}}(A) \neq \mathtt{out}$.
$(X, A, \mathtt{excuse})$:
$m_{k-1} = (Y, A', \mathtt{suspect})$ and $(A, A') \in AT_X^{d_k}$ and $\neg\exists (A_0, A_1, \ldots, A_n)$ $(n > 1)$ where $A_0 = A_n = A$, $A_1 = A'$ and $(A_{i-1}, A_i) \in AT_X^{d_k}$ $(1 \leq i \leq n)$.
$(X, -, \mathtt{pass})$
An agent can either give a counterargument $A$ to an argument $A'$ previously given by her opponent, or just pass. The same move of type assert is not allowed more than once throughout the dialogue.
A move of type suspect is to point out: “I suspect that you used argument $A'$ while hiding another argument $A$.” $Y$ then has to demonstrate that she is not being deceptive by immediately giving a counterargument. This is a move of type excuse. As for suspect, a loop is avoided. An agent can give either a move of type assert or one of type suspect on the same argument when both are allowed. An agent who is subjected to a move of type excuse is considered to bear the burden of proof, as Prakken et al. argued (Prakken et al., 2005).
A move of type pass passes on the turn without giving any information. An agent can give it in two different ways: only when there is no other allowed move (restricted use), or at any time (free use). A pass move under free use can be regarded as a kind of strategy of being silent and giving no information.
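To make the protocol concrete, here is a simplified Python sketch of the preconditions of Definition 3.4 (our own illustration, not the authors' implementation). Moves are triples (agent, argument, act), frameworks are pairs (AR, AT), and the sketch simplifies the k = 0 case and ignores the framework updates of Section 3.3.

```python
# A simplified sketch of the allowed-move preconditions of Definition 3.4
# (illustrative only; the data layout and helper names are ours, not the paper's).

def grounded(ar, at):
    """Grounded labelling of (AR, AT) as a dict: argument -> 'in' / 'out' / 'undec'."""
    attackers = {a: {x for (x, y) in at if y == a} for a in ar}
    lab, changed = {}, True
    while changed:
        changed = False
        for a in ar:
            if a in lab:
                continue
            if all(lab.get(b) == "out" for b in attackers[a]):
                lab[a], changed = "in", True       # all attackers already out (or none)
            elif any(lab.get(b) == "in" for b in attackers[a]):
                lab[a], changed = "out", True      # some attacker already in
    return {a: lab.get(a, "undec") for a in ar}

def attack_path_back(a, a_prime, at):
    """Is there an attack path a_prime -> ... -> a?  Used by the excuse condition."""
    frontier, seen = {a_prime}, set()
    while frontier:
        node = frontier.pop()
        if node == a:
            return True
        if node in seen:
            continue
        seen.add(node)
        frontier |= {y for (x, y) in at if x == node}
    return False

def allowed(move, dialogue, af_x, paf_y):
    """Check the precondition of move m_k = (X, A, act) against the dialogue d_k."""
    x, a, act = move
    if not dialogue:                       # k = 0 (simplified): only an assert is allowed
        return act == "assert"
    ar_x, at_x = af_x                      # X's own framework AF_X^{d_k}
    par_y, pat_y = paf_y                   # X's prediction of Y, PAF_Y^{d_k}
    last = dialogue[-1]
    opp_args = [r for (agent, r, t) in dialogue if agent != x and t != "pass"]
    if act == "assert":
        return (move not in dialogue                    # the same assert only once
                and last[2] != "suspect"                # not right after a suspect
                and any((a, r) in at_x for r in opp_args))
    if act == "suspect":
        return (last[2] != "suspect"
                and any((a, r) in pat_y for r in opp_args)
                and grounded(par_y, pat_y).get(a) != "out")
    if act == "excuse":
        return (last[2] == "suspect" and last[0] != x
                and (a, last[1]) in at_x
                and not attack_path_back(a, last[1], at_x))
    # pass: treated here as always allowed (free use); the restricted use would
    # additionally require that no other move is allowed.
    return act == "pass"
```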
3.3 Update of Argumentation
Frameworks
At each move, an argument in each agent’s argumen-
tation framework is disclosed. This may cause new
arguments and new attacks to be put forward. A move
of type suspect represents a suspicion about the previous argument, and generates no new arguments other than itself. This leads us to the following definition of an
update of an argumentation framework with respect
to a particular argument.
Definition 3.5 (update of argumentation framework). Let $\mathcal{UAF} = \langle UAR, UAT \rangle$ be a UAF. Let $AF = \langle AR, AT \rangle$ be an argumentation framework, $A \in UAR$, and $S$ be a set of arguments caused to be generated from $A$ using deductive inference, where the condition "if $A \in AR$ then $S \subseteq AR$" holds. Then, $AF' = \langle AR \cup AR', AT \cup AT' \rangle$ is said to be the argumentation framework of $AF$ updated by $A$, where $AR' = \{A\} \cup S$ and $AT' = \{(B, C) \mid (B, C) \in UAT,\ (B \in AR', C \in AR) \vee (B \in AR, C \in AR') \vee (B \in AR', C \in AR')\}$. (Note that $AF'$ can be calculated without assuming $\mathcal{UAF}$ and $S$ if we handle an argumentation framework instantiated with logical formulas; in this case, we construct an argumentation framework by logical deduction from a given set of formulas (Amgoud et al., 2000; Yokohama and Takahashi, 2016).)
After the move $m_k = (X, R, T)$, the following updates are performed:
$d_{k+1}$ is obtained from $d_k$ by adding $m_k$ to its end;
$AF_Y^{d_{k+1}}$, $PAF_X^{d_{k+1}}$ and $PAF_Y^{d_{k+1}}$ are the argumentation frameworks of $AF_Y^{d_k}$, $PAF_X^{d_k}$ and $PAF_Y^{d_k}$ updated by $R$, respectively;
$AF_X^{d_k}$ remains unchanged.
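The update step can be sketched as follows (ours, using the same pair representation; how the deduced set S is obtained, and the assumption that pass moves trigger no update, are our own choices).

```python
# Sketch of Definition 3.5 and the bookkeeping above (illustrative only).

def update(af, a, uaf, s=frozenset()):
    """Update AF = (AR, AT) by argument `a`; `s` is the set S deduced from `a`."""
    ar, at = af
    _, uat = uaf
    new_args = {a} | set(s)                        # AR' = {A} union S
    merged = ar | new_args
    # AT': attacks of the UAF that touch a new argument and stay inside AR union AR'
    new_atts = {(b, c) for (b, c) in uat
                if (b in new_args or c in new_args) and b in merged and c in merged}
    return merged, at | new_atts

def apply_move(move, dialogue, af_y, paf_x, paf_y, uaf):
    """After m_k = (X, R, T): extend d_k and update AF_Y, PAF_X and PAF_Y by R.

    X's own framework AF_X is left unchanged, as stated above.
    """
    x, r, t = move
    dialogue = dialogue + [move]
    if t != "pass":                                # we assume a pass discloses nothing
        af_y = update(af_y, r, uaf)
        paf_x = update(paf_x, r, uaf)
        paf_y = update(paf_y, r, uaf)
    return dialogue, af_y, paf_x, paf_y
```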
3.4 Dishonesty
Definition 3.6 (honest/dishonest move). For a dialogue $d_k = [m_0, \ldots, m_{k-1}]$ where $m_k = (X, R, T)$, if $\mathcal{L}_{AF_X^{d_k}}(R) = \mathtt{in}$, then $m_k$ is said to be $X$'s honest move and $R$ is said to be an honest argument; otherwise, $m_k$ is said to be $X$'s dishonest move and $R$ is said to be a dishonest argument.
Definition 3.7 (suspicious move). For a dialogue $d_k = [m_0, \ldots, m_{k-1}]$ where $m_{k-1} = (X, R, \mathtt{assert})$ or $m_{k-1} = (X, R, \mathtt{excuse})$, if $\mathcal{L}_{PAF_X^{d_k}}(R) \neq \mathtt{in}$, then $m_{k-1}$ is said to be a suspicious move for $Y$, and $R$ is said to be a suspicious argument.
Intuitively, an honest move means that an agent gives an argument that she believes, and a suspicious move means that she cannot believe her opponent's argument. Note that "honest" is a concept for the persuader, whereas "suspicious" is one for her opponent. Hence, a dishonest argument is not always a suspicious argument, and a suspicious argument is not always a dishonest argument.
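Definitions 3.6 and 3.7 translate directly into label checks. The sketch below (ours) classifies a move as honest or dishonest from the mover's own framework, and as suspicious or not from the opponent's prediction of the mover, using the same grounded-labelling helper as in the earlier sketches.

```python
# Sketch of Definitions 3.6 and 3.7 (illustrative; representation is ours).

def grounded(ar, at):
    """Grounded labelling (same helper as in the earlier sketches)."""
    attackers = {a: {x for (x, y) in at if y == a} for a in ar}
    lab, changed = {}, True
    while changed:
        changed = False
        for a in ar:
            if a in lab:
                continue
            if all(lab.get(b) == "out" for b in attackers[a]):
                lab[a], changed = "in", True
            elif any(lab.get(b) == "in" for b in attackers[a]):
                lab[a], changed = "out", True
    return {a: lab.get(a, "undec") for a in ar}

def is_honest(move, af_x):
    """m_k = (X, R, T) is honest iff R is labelled 'in' in X's own AF_X^{d_k}."""
    _, r, _ = move
    ar, at = af_x
    return grounded(ar, at).get(r) == "in"

def is_suspicious(move, paf_x):
    """An assert/excuse by X is suspicious for Y iff R is not 'in' in PAF_X^{d_k}."""
    _, r, t = move
    ar, at = paf_x
    return t in ("assert", "excuse") and grounded(ar, at).get(r) != "in"
```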
3.5 An Example
Consider the Labo Selection Example shown in Section 1. Alice is the persuader $P$ and Bob is the persuadee $C$. Let $A_0, A_1, \ldots, A_5$ be the following arguments.
be the following arguments.
A
0
: apply to Charlie’s laboratory
A
1
: do not apply to Charlie’s laboratory
A
2
: apply to Charlie’s laboratory
because he is generous
A
3
: apply to Charlie’s laboratory
because he is strict
A
4
: do not apply to Charlie’s laboratory
because he is strict
A
5
: Charlie is not generous
2
AF
can be calculated without assuming UA F and S, if
we handle an argumentation framework instantiated with
logical formulas. In this case, we construct an argumen-
tation framework by logical deduction from a given set of
formulas (Amgoud et al., 2000; Yokohama and Takahashi,
2016).
Evaluation of Dishonest Argumentation based on an Opponent Model: A Preliminary Report
271
We show one example of initial argumentation frameworks in Figure 2. Assume that $AF_P$ is the same as a given UAF.
Figure 2: Initial state of argumentation frameworks ($AF_P$, $PAF_C$, $AF_C$, $PAF_P$).
A dialogue proceeds as follows:
$m_0 = (P, A_0, \mathtt{assert})$
$m_1 = (C, A_1, \mathtt{assert})$
$m_2 = (P, A_2, \mathtt{assert})$
$m_3 = (C, A_5, \mathtt{suspect})$
In this case, since $\mathcal{L}_{AF_P^{d_3}}(A_2) = \mathtt{out}$ (Figure 3(a)), $m_2$ is $P$'s dishonest move, and since $\mathcal{L}_{PAF_P^{d_3}}(A_2) = \mathtt{out}$ (Figure 3(b)), $m_2$ is a suspicious move for $C$, which causes $C$ to give $m_3$, a move of type suspect.
Figure 3: Argumentation frameworks for $d_3$: (a) $AF_P^{d_3}$; (b) $PAF_P^{d_3}$.
3.6 Termination
There are two ways to terminate a persuasive dialogue with a dishonest argument. In the first case, an agent cannot make an excuse when her opponent points out her deception. In this case, the agent is regarded as dishonest because she cannot answer her opponent's challenge, regardless of whether she actually made a dishonest move. In the second case, there exists $d_k$ such that neither agent can make a move of type assert or suspect. In this case, we say that the persuasion of $P$ on subject argument $A_0$ succeeds if $\mathcal{L}_{AF_C^{d_k}}(A_0) = \mathtt{in}$ holds, and fails otherwise. After one agent has made a pass move, the other agent may present additional arguments, until neither agent has any further arguments.
Definition 3.8 (win/lose of persuasion). If a persua-
sion succeeds or the persuadee is regarded as dishon-
est, then it is said that the persuader wins. If persua-
sion fails or the persuader is regarded as dishonest,
then it is said that the persuader loses.
If we consider a dialogue as a debate game, the win or loss of the game is judged from the arguments disclosed so far. We construct a committed argumentation framework (CAF) in addition to the agents' inner argumentation frameworks.
Definition 3.9 (committed argumentation framework). For a dialogue $d_k = [m_0, \ldots, m_{k-1}]$ where $m_{k-1} = (X, A, T)$, the committed argumentation framework $CAF_{d_k} = \langle AR_{d_k}, AT_{d_k} \rangle$ is defined as follows:
$CAF_{d_0} = \langle \emptyset, \emptyset \rangle$;
$CAF_{d_k} = \langle AR_{d_{k-1}} \cup \{A\},\ AT_{d_{k-1}} \cup \{(A, A')\} \rangle$ if $T \neq \mathtt{pass}$;
$CAF_{d_k} = \langle AR_{d_{k-1}},\ AT_{d_{k-1}} \rangle$ if $T = \mathtt{pass}$;
where $k > 0$ and $A$ attacks the previous argument $A'$.
Definition 3.10 (win/lose of debate game). Let $CAF_{d_k}$ be the CAF at the termination of the dialogue. It is said that the agent who proposed the subject argument wins if $\mathcal{L}_{CAF_{d_k}}(A_0) = \mathtt{in}$ holds, and loses otherwise.
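A sketch of Definitions 3.9 and 3.10, under the same toy representation (our own illustration): the CAF is grown move by move from the disclosed arguments, each of which attacks the previously disclosed one, and the debate game is judged from the grounded label of the subject argument in the final CAF.

```python
# Sketch of Definitions 3.9 and 3.10 (illustrative; representation is ours).

def grounded(ar, at):
    """Grounded labelling (same helper as in the earlier sketches)."""
    attackers = {a: {x for (x, y) in at if y == a} for a in ar}
    lab, changed = {}, True
    while changed:
        changed = False
        for a in ar:
            if a in lab:
                continue
            if all(lab.get(b) == "out" for b in attackers[a]):
                lab[a], changed = "in", True
            elif any(lab.get(b) == "in" for b in attackers[a]):
                lab[a], changed = "out", True
    return {a: lab.get(a, "undec") for a in ar}

def committed_af(dialogue):
    """Build the CAF from the disclosed moves; pass moves add nothing."""
    ar, at = set(), set()
    prev_arg = None
    for (_, a, act) in dialogue:
        if act != "pass":
            ar.add(a)
            if prev_arg is not None:              # each argument attacks the previous one
                at.add((a, prev_arg))
            prev_arg = a
    return ar, at

def proposer_wins_debate(dialogue, subject):
    ar, at = committed_af(dialogue)
    return grounded(ar, at).get(subject) == "in"

# Toy run: P asserts A0, C counters with A1, P counters with A2, both pass.
d = [("P", "A0", "assert"), ("C", "A1", "assert"),
     ("P", "A2", "assert"), ("C", None, "pass"), ("P", None, "pass")]
print(proposer_wins_debate(d, "A0"))   # True: A2 defeats A1, which reinstates A0
```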
4 EXPERIMENTAL RESULTS
4.1 Condition
A dialogue proceeds between a persuader P and a per-
suadee C according to the dialogue model described
in Section 3.
P aims to persuade C to accept a subject argument,
whereas C makes allowed moves but does not have a
goal.
We take an arbitrary UAF with a tree structure that satisfies the following conditions (these conditions are based on real argumentation experiments regarding a social issue conducted in a certain laboratory):
The root node is a subject argument.
The number of nodes is less than 20.
The number of child nodes for each node is at
most three.
The number of leaf nodes is five to seven.
For a given UAF $\mathcal{UAF}$, we make four initial argumentation frameworks: $AF_P$, $AF_C$, $PAF_C$ and $PAF_P$. These frameworks all satisfy the inclusion relationships.
To simplify the problem, we assume that one of $AF_P$ and $AF_C$ is the biggest, that is, identical to $\mathcal{UAF}$; and the other is about half its size, that is, it consists of about half the number of arguments and half the number of attacks. In addition, to invoke a dialogue, we assume that the smaller one inherits half of the paths from the UAF, and is denoted half. As for $PAF_P$ and $PAF_C$, we assume that one of them is the biggest, that is, half; and the other is the smallest, that is, the empty set. Under these conditions, we set the initial set of argumentation frameworks for a given UAF as one of the following four types in Table 1.
Table 1: Argumentation frameworks for agents.
Type   AF_P   AF_C   PAF_C   PAF_P
I      UAF    half   half    ∅
II     half   UAF    ∅       half
III    UAF    half   ∅       half
IV     half   UAF    half    ∅
We defined ten sets of argumentation frameworks for each type and then simulated all possible dialogues for five different UAFs. As a result, 50 cases are investigated for each type.
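The setup above can be reproduced along the following lines. This sketch is ours, not the authors' generator: the attack direction (each child argument attacks its parent) and the exact sampling scheme for the tree and for the 'half' framework are assumptions.

```python
# Sketch of the experimental setup of Section 4.1 (ours, not the authors' generator).
import random

def random_tree_uaf(rng, max_nodes=19, max_children=3, leaves=(5, 7)):
    """Generate a tree-shaped UAF: node 0 is the subject; children attack parents."""
    while True:                                   # rejection sampling over the constraints
        parents = {0: None}
        frontier = [0]
        while frontier and len(parents) < max_nodes:
            node = frontier.pop(0)
            for _ in range(rng.randint(0, max_children)):
                if len(parents) >= max_nodes:
                    break
                child = len(parents)
                parents[child] = node
                frontier.append(child)
        children = {n: [c for c, p in parents.items() if p == n] for n in parents}
        n_leaves = sum(1 for n in parents if not children[n])
        if leaves[0] <= n_leaves <= leaves[1]:
            ar = set(parents)
            at = {(c, p) for c, p in parents.items() if p is not None}
            return ar, at

def half_af(uaf, rng):
    """Keep roughly half of the root-to-leaf paths of the UAF (the 'half' framework)."""
    ar, at = uaf
    children = {n: [c for (c, p) in at if p == n] for n in ar}
    leaf_nodes = [n for n in ar if not children[n]]
    kept_leaves = rng.sample(leaf_nodes, max(1, len(leaf_nodes) // 2))
    parent = {c: p for (c, p) in at}
    kept = set()
    for leaf in kept_leaves:                      # walk each kept path back to the root
        node = leaf
        while node is not None:
            kept.add(node)
            node = parent.get(node)
    return kept, {(c, p) for (c, p) in at if c in kept and p in kept}

rng = random.Random(0)
uaf = random_tree_uaf(rng)
print(len(uaf[0]), "arguments;", len(half_af(uaf, rng)[0]), "in the half AF")
```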
We implemented seven dialogue models (a)–(g) that use different protocols: honest and dishonest strategies, and a range of strategies for making pass moves. We executed argumentations for each model and compared their results.
4.2 Results
We count the number of P's wins and the number of P's losses over all dialogues in each case. If the number of wins is greater than or equal to the number of losses, we say that P is dominant in the case; otherwise, we say that C is dominant. For example, if P wins 125 dialogues and loses 67, then P is considered to be dominant in this case. Table 2 and Table 3 show the ratio of cases in which P was dominant out of the 50 cases tested for each type, for persuasion dialogues and debate games, respectively.
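For clarity, the dominance criterion is a simple count over the simulated dialogues of a case; the sketch below uses hypothetical result data.

```python
# Sketch of the dominance criterion (illustrative; the result data are hypothetical).
def p_is_dominant(results):
    """`results` is a list of 'win'/'lose' outcomes for P over all dialogues of a case."""
    wins = results.count("win")
    return wins >= len(results) - wins            # at least as many wins as losses

print(p_is_dominant(["win"] * 125 + ["lose"] * 67))   # True, as in the example above
```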
In the following tables, the term 'honest' means that an agent only makes honest moves, while 'dishonest' means that an agent makes both honest and dishonest moves. The notation for honesty is as follows: 'dd' means that P and C are both dishonest, 'hd' means P is honest and C is dishonest, and 'dh' means P is dishonest and C is honest. The notation for pass moves is as follows: 'free' means that pass moves are allowed at any time, whereas 'rst' means that they are restricted to when an agent cannot make any other move. The term 'termination' indicates C's termination strategy: 'both' means that if P makes a pass move, then C also makes a pass move immediately, and 'one' means that C continues to make allowed moves as long as C can.
Table 2 shows the results of persuasion, and compares the effects of dishonesty and pass moves. Comparing the results of models (a), (b) and (c), P is dominant less often in (b) than in (a) and (c). This shows that the number of dialogues that P wins increases when P gives dishonest arguments. Comparing Types I and III, P is dominant in fewer cases when P has more predictions, which is against our expectation. This is because the forms of $AF_C$ differ between Types I and III. It follows that the result of the dialogue depends on the form of the initial argumentation frameworks. We did not find any significant differences between honesty and dishonesty, or between the two ways of making a pass move.
Table 2: The ratio of cases in which P is dominant (%): persuasion.
model      (a)    (b)    (c)    (d)
honesty    dd     hd     dh     dd
pass       free   free   free   rst
Type I     36     32     36     36
Type II    40     40     40     40
Type III   62     60     62     50
Type IV    40     40     40     64
Table 3 shows the results of the debate game, where we investigated the effect of different pass-move strategies. Both agents are dishonest. P's pass-move strategy is fixed as follows: when C gives a pass move, P also makes a pass move immediately, even if she still has allowed moves; in other situations, she can give a pass move only when she has no other allowed moves. We then varied C's pass-move strategy.
Table 3 shows the effect of C's strategy. In model (g), if C makes a pass move, then P makes a pass move, which terminates the argumentation immediately. At that time, the label of the subject argument in the CAF is $\mathtt{in}$, which means that P has won the debate game. C can make a pass move at any time, which causes P to be dominant more often than in models (e) and (f).
Next, we investigated the effect of deception by examining each dialogue in specific cases. We compared the results of model (b), in which P is honest, to those of the other models, in which P is dishonest, for each initial set of argumentation frameworks. There exists no set of argumentation frameworks for which P is dominant in (b) while C is dominant in the other models. In addition, we found two cases of Type I in which P has no possibility of winning in model (b), whereas C is regarded as dishonest (P wins) in some dialogues in the other models. This is because the increase in the number of P's arguments caused by giving dishonest arguments increases the chance of revealing her opponent's dishonesty. In these cases, P can win by making appropriate moves if she is dishonest, but not if she is honest.
Table 3: The ratio of cases in which P is dominant (%): debate game.
model        (e)    (f)    (g)
pass         rst    rst    free
termination  one    both   both
Type I       34     34     68
Type II      22     22     24
Type III     58     58     62
Type IV      16     44     32
Table 4 shows the number of dialogues in which P reveals C's dishonesty, and its ratio against all dialogues, for two specific cases. For example, in case 1, P loses all dialogues in model (b); in model (a), C is regarded as dishonest (P wins) in 22 dialogues, which is 2.0 percent of all dialogues, and P loses the remaining dialogues.
Table 4: The number of dialogues in which P reveals C's dishonesty and its ratio.
case 1
model      (b)   (a)    (d)    (e)     (f)
number     0     22     12     2       12
ratio (%)  0     2.0    2.5    11.1    25
case 2
model      (b)   (a)    (d)    (e)     (f)
number     0     2      0      1       2
ratio (%)  0     7.4    0      25      33.3
4.3 Discussions
Even if an agent does not have any predictions initially, she may reveal the dishonesty of her opponent. This is counter-intuitive, because such an agent does not seem able to make a move of type suspect. It happens because the same argument can be given with different acts, such as $(X, A, \mathtt{assert})$ and $(X, A, \mathtt{suspect})$. In this case, her own argumentation framework originally contains argument $A$. If she makes the move $(X, A, \mathtt{assert})$, then argument $A$ is added to her prediction. Later, together with arguments presented by her opponent, her prediction accumulates until the label of $A$ becomes $\mathtt{out}$. As a result, she may make the move $(X, A, \mathtt{suspect})$.
An agent loses if she cannot return a move of type
excuse immediately upon receiving a suspect move.
It follows that, when possible, it is more advanta-
geous to make a move of the form $(X, A, \mathtt{suspect})$ than $(X, A, \mathtt{assert})$.
The ratio of P's dominance appears to depend
on the form of the initial argumentation frameworks.
However, our simulations did not show that making a
dishonest argument or a pass move has any significant
effect, regardless of the initial argumentation frame-
works. Therefore, whether there is a relationship be-
tween the initial argumentation frameworks and the
outcome of the argument remains an open question.
The next point to consider will be how to determine
moves strategically. We need to conduct more exper-
iments and further analysis to address these points.
5 RELATED WORKS
Parsons et al. investigated the relationships between
agents’ initial knowledge and the outcome of the dia-
logue (Parsons et al., 2003). They clarified their char-
acteristics and examined the effect of agents’ tactics
theoretically.
Thimm provided an excellent survey of strate-
gic argumentation (Thimm, 2014). He classified the
treatment of strategic argumentation from a variety of
viewpoints, including game theory, opponent models, and so on.
In this paper, we assumed that an opponent model is given in advance, which is a similar assumption to that in most other works. On the other hand, some studies have investigated the process of updating the opponent model during the dialogue. Hunter studied persuasive dialogues, and how to evaluate them using an opponent model (Hunter, 2015). He proposed an asymmetric model between a system and a user, where the system makes moves such as inform and challenge and receives simple yes/no answers from the user. Based on the user's replies, the user model held by the system is updated using probability. Since the setting is asymmetric, the user's arguments are highly restricted, and he does not focus on the strategy itself.
Rienstra et al.'s work is the most relevant to ours. They proposed several kinds of opponent models and presented experimental results from arguments based on these models. The prediction in our paper corresponds to the 'simple model' in their paper (Rienstra et al., 2013). They evaluated each dialogue upon termination and updated the opponent model probabilistically, whereas we do not use probability. In addition, they handle neither dishonest arguments nor the semantics of an argument.
Several platforms have been developed to com-
pare agents’ strategic arguments, and there are strate-
gic argument competitions (Yuan et al., 2008). How-
ever, these do not include any perspectives on dishon-
est arguments.
Sakama formalized dishonesty using an argumen-
tation framework as a debate game (Sakama, 2012;
Sakama et al., 2015). Unlike in persuasion, the outcome of the dialogue is judged by the committed argumentation framework, so each agent need not estimate an opponent model. He also investigated some properties of his model theoretically, but did not formalize the detection of, or excuses for, deception.
6 CONCLUSIONS
We have presented the results of simulations of dis-
honest argumentation based on an opponent model.
This is the first attempt to present an evaluation of dis-
honest argumentation. The results show that the use
of dishonest arguments affects the chances of success-
fully persuading an opponent, or winning a debate game. However, we could not identify a relationship between the result of a dialogue and the argumentation frameworks of the agents.
As this is a preliminary report, only simple cases are handled. In future work, we should perform more experiments on various types of argumentation frameworks, including those with cyclic structures, and carry out more precise analysis. We will also investigate the results under different semantics, since the concepts regarding dishonesty depend on the semantics.
REFERENCES
Amgoud, L. and de Saint-Cyr, F. (2013). An axiomatic ap-
proach for persuasion dialogs. In ICTAI 2013, pages
618–625.
Amgoud, L., Maudet, N., and Parsons, S. (2000). Modeling
dialogues using argumentation. In ICMAS2000, pages
31–38.
Baroni, P., Caminada, M., and Giacomin, M. (2011). An introduction to argumentation semantics. The Knowledge Engineering Review, 26(4):365–410.
Bench-Capon, T. (2003). Persuasion in practical argument using value-based argumentation frameworks. Journal of Logic and Computation, 13(3):429–448.
Black, E. and Hunter, A. (2015). Reasons and options for
updating an opponent model in persuasion dialogues.
In TAFA2015.
Dung, P. (1995). On the acceptability of arguments and
its fundamental role in nonmonotonic reasoning, logic
programming and n-person games. Artificial Intelli-
gence, 77(2):321–358.
Hadjinikolis, C., Siantos, Y., Modgil, S., Black, E., and
McBurney, P. (2013). Opponent modelling in persua-
sion dialogues. In IJCAI2013, pages 164–170.
Hunter, A. (2015). Modelling the persuadee in asymmet-
ric argumentation dialogues for persuasion. In IJ-
CAI2015, pages 3055–3061.
Parsons, S., Wooldridge, M., and Amgoud, L. (2003). On
the outcomes of formal inter-agent dialogues. In AA-
MAS2003, pages 616–623.
Prakken, H. (2006). Formal systems for persuasion
dialogue. The Knowledge Engineering Review,
21(2):163–188.
Prakken, H., Reed, C., and Walton, D. (2005). Dialogues
about the burden of proof. In ICAIL2005, pages 115–
124.
Rahwan, I., Larson, K., and Tohmé, F. (2009). A characterization of strategy-proofness for grounded argumentation semantics. In IJCAI2009, pages 251–256.
Rahwan, I. and Simari, G. (2009). Argumentation in Artifi-
cial Intelligence. Springer.
Rienstra, T., Thimm, M., and Oren, N. (2013). Opponent
models with uncertainty for strategic argumentation.
In IJCAI2013, pages 332–338.
Sakama, C. (2012). Dishonest arguments in debate games.
In COMMA2012, pages 177–184.
Sakama, C., Caminada, M., and Herzig, A. (2015). A for-
mal account of dishonesty. The Logic Journal of the
IGPL, 23(2):259–294.
Takahashi, K. and Yokohama, S. (2017). On a formal
treatment of deception in argumentative dialogues. In
EUMAS-AT2016, Selected papers, pages 390–404.
Thimm, M. (2014). Strategic argumentation in multi-agent systems. Künstliche Intelligenz, 28(3):159–168.
Yokohama, S. and Takahashi, K. (2016). What should an
agent know not to fail in persuasion? In EUMAS-
AT2015, Selected papers, pages 219–233.
Yuan, T., Schulze, J., Devereux, J., and Reed, C. (2008).
Towards an arguing agents competition: Building on
argumento. In CMNA.