SOURCE SENSITIVE ARGUMENTATION SYSTEM
Chee Fon Chang, Peter Harvey and Aditya Ghose
Decision Systems Lab
University of Wollongong, NSW 2522, Australia
Keywords:
Multi-agent Argumentation, Multi-agent Negotiation.
Abstract:
There exist many approaches to agent-based conflict resolution. Of particular interest are approaches which
adopt argumentation as their underlying conflict resolution machinery. In most argumentation systems, the
argument source plays a minimal role. We feel that ignoring this important attribute of human argumentation
reduces the capabilities of current argumentation systems. This paper focuses on the importance of sources in
argumentation and extends this to the notion of credibility of agents.
1 INTRODUCTION
There have been many recent developments in logical
systems for defeasible argumentation and in the use of
argumentation in multi-agent interaction such as negotiation,
group decision making and dispute mediation.
Systems such as (Prakken, 1993; Dung, 1995;
Bondarenko et al., 1997; Vreeswijk, 1997; Dung and
Son, 2001) provide formalisations of defeasible or
non-monotonic reasoning. These systems focus on
representation and the general interactions that exist between
arguments. (Jennings et al., 1998; Sierra et al.,
1997; Kraus et al., 1998; Parsons et al., 1998) study
argumentation as the conflict resolution machinery of
negotiation, where arguments for an offer should persuade
the other party to accept the offer. These recent
applications of argumentation in multi-agent systems
have drawn great interest. However, if one is to use defeasible
argumentation as the conflict resolution machinery,
modifications are required.
We view argumentation as being (simultaneously)
a process for information exchange, a process for
conflict resolution and an approach to knowledge
representation/reasoning. In (Kraus et al., 1998), the authors stated
that argumentation is a process that has the structure
of a logical proof but does not hold the
strength of one. Consistent with this view, we believe
that multi-agent argumentation focuses not on logical
“truth” but on convincing/persuading other agents of
a particular view or position. As such, how an
individual is perceived in their community will affect the
acceptance of their arguments.
In the next section, we motivate this work with
a simple example. In Section 2, a formalisation of the
source sensitive argumentation system is provided.
Section 3 provides the procedures and worked examples.
Section 4 describes the results and behavioural
impacts.
1.1 Motivation
We will first motivate our work using an adaptation of
an example from (Verheij, 1994) and then extend the
example to capture real-life situations. Let’s assume
that Bill is a juvenile and has committed a crime. We
pose a question: “Should Bill be punished?”. Assum-
ing that you are given the statements below:
Bill has robbed someone, so he should be jailed. (α)
Bill is a juvenile, therefore he should not go to jail. (β)
In the above statements, if we assume that the second
statement is stronger, then we can say that β
defeats α. However, the above form of argumentation
is insufficient to capture real-life argumentation.
Existing argumentation systems (Jennings et al.,
1998; Parsons et al., 1998; Sierra et al., 1997; Kraus
et al., 1998; Dung, 1995; Verheij, 1994; Pollock,
1991; Vreeswijk, 1997) separate the source of the
argument from the argument when evaluating the
defeat. We believe that the validity and strength of an
argument cannot be captured by the argument alone. It
is common in reality for one to evaluate the strength
and validity of arguments with respect to the provider
of the argument. Furthermore, we claim that “undercut”
(Pollock, 1970) (a form of attack) cannot be
performed without this meta-information.
Consider the following modification of the above
example:
Tom: Bill has robbed someone, so he should be jailed. (α)
Dick: Bill is a juvenile, therefore he should not go to jail. (β)
Previously, we assumed that the second statement
is stronger, and hence we said that β defeats α. If
we knew that Tom is a known liar, it would further
strengthen the acceptability of this defeat. However,
if we were told or knew that Tom is a police officer,
we would be less willing to accept the defeat; we
would most likely rule in favour of Tom.
We will now extend this example further. Let us assume
that Tom is a police officer, but that over a period
of time allegations of wrongful arrest, brutality
and violation of arrest procedures were made against Tom
by an independent source. These allegations
would diminish Tom's credibility. Now, if we were
asked to evaluate subsequent arguments between Tom
and Dick, or to re-evaluate the defeat between α and
β, the original defeat may no longer hold. From the
above examples, it is clear that the defeat relation
cannot simply be between arguments.
Proposals by (Carbogim, 2000) capture the effect
of arguments on the underlying knowledge base;
however, the proposal does not deal with a changing defeat
relation. In this paper, we will show that by simply
tagging arguments with sources, the resulting defeat
relation is more fluid and changes the dynamics of
argumentation systems.
2 FORMAL DEFINITION
As stated in (Prakken and Vreeswijk, 2002), any de-
feasible argumentation system consists of several im-
portant components: a language, a definition of argu-
ments, a definition of inconsistency/conflict/attack, a
defeat relation and finally, the overall status of the ar-
gument. We will now provide the components of our
system closely matching these key components.
For simplicity, we will take any finitely generated
propositional language L with the usual punctuation
signs, and logical connectives ¬ (not) and → (implies).
For any set of wffs S ⊆ L and any α ∈ L,
S ⊢ α means α is provable from premises S. For any
set of wffs S ⊆ L, Cn(S) = { α | S ⊢ α }.
Definition 1 (Argument) An argument α is a triple
⟨F, A, C⟩ where F, A and C denote the sets of facts,
assumptions and conclusions respectively.
Definition 2 (Well-founded Argument) An argument
α is a well-founded argument iff it satisfies the following
conditions:
• F, A, C ⊆ L
• F ∩ A = ∅
• F ∪ A ⊢ C
• Cn(F ∪ A ∪ C) ⊬ ⊥
We will write Fα, Aα and Cα to respectively denote
the facts, assumptions and conclusions associated
with an argument α. One can view a well-founded
argument as a premise-conclusion pair where
the premise is ⟨F, A⟩ and the conclusion is C. By
allowing assumptions, we generalise arguments to
allow for the representation of weak facts. Note that since
we are interested in rational agents, we have eliminated
self-defeating (Prakken and Vreeswijk, 2002;
Pollock, 1991) arguments. If one was interested in
creating irrational agents or paradoxes, one could make
the appropriate changes by removing the last condition.
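To make Definition 2 concrete, the following is a minimal Python sketch, assuming a brute-force propositional entailment checker over formulas built from string atoms and the compounds ('not', f) and ('imp', f, g). The names atoms, holds, entails, consistent and is_well_founded are our own illustrations, not part of the formal system; the first condition of Definition 2 is implicit in the representation.

from itertools import product

def atoms(formula):
    # Collect the atomic propositions occurring in a formula.
    if isinstance(formula, str):
        return {formula}
    return set().union(*(atoms(sub) for sub in formula[1:]))

def holds(formula, valuation):
    # Evaluate a formula under a truth assignment.
    if isinstance(formula, str):
        return valuation[formula]
    if formula[0] == 'not':
        return not holds(formula[1], valuation)
    return (not holds(formula[1], valuation)) or holds(formula[2], valuation)

def entails(premises, conclusions):
    # S |- C, checked by enumerating all valuations of the atoms involved.
    names = sorted(set().union(*(atoms(f) for f in premises | conclusions)) or {'p'})
    for values in product([True, False], repeat=len(names)):
        valuation = dict(zip(names, values))
        if all(holds(f, valuation) for f in premises) and \
                not all(holds(f, valuation) for f in conclusions):
            return False
    return True

def consistent(formulas):
    # Cn(formulas) does not yield falsum: some valuation satisfies them all.
    names = sorted(set().union(*(atoms(f) for f in formulas)) or {'p'})
    return any(all(holds(f, dict(zip(names, values))) for f in formulas)
               for values in product([True, False], repeat=len(names)))

def is_well_founded(F, A, C):
    # The remaining three conditions of Definition 2.
    return not (F & A) and entails(F | A, C) and consistent(F | A | C)

# Example: <{robbed, robbed -> jail}, {}, {jail}> is well-founded:
# is_well_founded({'robbed', ('imp', 'robbed', 'jail')}, set(), {'jail'})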
Definition 3 (Conflict) A pair of well-founded arguments
β and γ are said to be in conflict iff
Cn(Cβ ∪ Cγ) ⊢ ⊥.
Definition 4 (Attack) For a pair of well-founded arguments
β and γ, we say that β attacks γ iff:
1. β and γ are in conflict
2. Cn(Fβ ∪ Fγ ∪ Aγ) ⊢ ⊥
There exists great variation in the definitions of
conflict and attack in the defeasible argumentation
literature, and they are often defined in a way that makes them
interchangeable. In our system, these notions are instead
assigned specific meanings.
Conflict represents an inconsistency between the
conclusions and is symmetrical. Attack represents an
inconsistency in the premises. It is our view that if
the conclusions are non-conflicting then there are no
issues to be resolved. Generally, if we are unaware
of a difference in opinion, we do not argue about it.
Later we will combine these two notions to define rebuttal,
assumption attack and undercut. Note that our
definitions will differ slightly from those described in
(Prakken and Vreeswijk, 2002).
Definition 5 (Tagged Arguments) Given a set of
unique identifiers ℐ, we define 𝒜 as a set of tagged
arguments of the form ⟨S, A⟩ where
• S ∈ ℐ represents the tagged argument's source.
• A is a well-founded argument.
We will write Sφ and Aφ to respectively denote the
source and well-founded argument associated with a
tagged argument φ.
Definition 6 (Credibility Function) Given a set of
unique identifiers ℐ, we say C is a credibility function
if it maps all values of ℐ into some totally ordered
set.
The notion of credibility provides the agent with a
measure of strength of belief per argument source. This
notion also provides a simplistic measure of trust. For
simplicity, we have defined it as a function that maps a
set of unique identifiers into a totally ordered set. However,
one could define an arbitrarily complex measure
taking into account the context in which the argument
is situated. For example, if an agent is a stockbroker,
then any arguments related to the share market
from this agent will be more credible than those from any agent
that is not a stockbroker. One could also define a
mapping to an arbitrary set as long as there exist
transitive and asymmetric operators defined over
that set.
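As an illustration of Definition 6, the following is a minimal Python sketch, assuming floats as the totally ordered set; the identifier values and the names simple_credibility, C and contextual_credibility are hypothetical.

simple_credibility = {'Tom': 0.9, 'Dick': 0.6, 'Harry': 0.3}

def C(source):
    # Credibility function: maps an identifier into a totally ordered set.
    return simple_credibility.get(source, 0.0)

def contextual_credibility(source, topic, base=simple_credibility,
                           brokers=('Dick',)):
    # A context-sensitive variant, as suggested above: a stockbroker's
    # arguments about the share market are weighted more heavily.
    bonus = 0.5 if topic == 'share-market' and source in brokers else 0.0
    return base.get(source, 0.0) + bonus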
Definition 7 (Defeat) Given a set of tagged arguments
𝒜 and the credibility function C, a relation
D ⊆ 𝒜 × 𝒜 is said to be a defeat relation on 𝒜.
We will write φDψ iff at least one of the following is
true:
• Aφ attacks Aψ and Aψ does not attack Aφ
• Aφ attacks Aψ and C(Sφ) > C(Sψ)
• Aφ and Aψ are in conflict, neither Aφ attacks Aψ
nor Aψ attacks Aφ, and C(Sφ) > C(Sψ)
The tagging of arguments allows us to uniquely
identify the argument source, and so we make use of
this in our definition of defeat. Also, note that our notion
of defeat is not defined as a global relation but
as a per-agent defeat relation, determined by a
credibility function C (see Definition 8 below).
Our definition of defeat also encapsulates various
types of defeat. For example, (Prakken and
Vreeswijk, 2002) states that assumption attack occurs
when one argument proves what was assumed
unprovable by the first (in other words, when a
conclusion of one argument attacks the assumption of
another). We will say that assumption attack occurs
when the facts of one argument attack the assumptions of
another argument. This is captured by attack(φ, ψ) ∧
¬attack(ψ, φ).
Similarly, (Prakken and Vreeswijk, 2002) states
that rebuttal occurs when the conclusion of one argument
attacks the premise of another. We deviate
slightly and say that rebuttal occurs when the facts of one
argument attack the facts of another argument. This is
captured by attack(φ, ψ) ∧ C(Sφ) > C(Sψ).
Finally, (Prakken and Vreeswijk, 2002) states that
undercut occurs when an argument attacks an inference
rule used in another argument. In our system
undercut occurs when the conclusions of the
arguments contradict but neither the facts nor
assumptions of both arguments contradict, as this
implies that the contradiction occurs during inference.
This notion is captured by conflict(φ, ψ) ∧
¬(attack(φ, ψ) ∨ attack(ψ, φ)) ∧ C(Sφ) > C(Sψ).
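The three clauses of Definition 7, and hence assumption attack, rebuttal and undercut, can be sketched in Python as follows; the predicates attacks and conflicts are assumed to implement Definitions 4 and 3 respectively, and all names are illustrative.

def defeats(phi, psi, attacks, conflicts, C):
    # phi D psi, per Definition 7. phi and psi are tagged arguments,
    # i.e. (source, well-founded argument) pairs; C maps a source to an
    # ordered credibility value.
    s_phi, a_phi = phi
    s_psi, a_psi = psi
    if attacks(a_phi, a_psi) and not attacks(a_psi, a_phi):
        return True   # assumption attack
    if attacks(a_phi, a_psi) and C(s_phi) > C(s_psi):
        return True   # rebuttal
    if (conflicts(a_phi, a_psi) and not attacks(a_phi, a_psi)
            and not attacks(a_psi, a_phi) and C(s_phi) > C(s_psi)):
        return True   # undercut
    return False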
Definition 8 (Agent) Given a set of unique identifiers
ℐ and a set of tagged arguments 𝒜, an agent is
represented as a tuple of the form ⟨I, A, C⟩ where
• I ∈ ℐ.
• A ⊆ 𝒜 s.t. ∀φ ∈ 𝒜, if Sφ = I then φ ∈ A.
• C is a credibility function. This represents the
credibility (higher values are better) of other agents in
the system as evaluated by the agent.
Note that the credibility function is subject to revision
by the agent during the execution of the system,
as each agent adjusts its view of fellow agents. Note
also that the set of tagged arguments A is subject
to change as individual agents discover new arguments
during the argumentation process. We do not require
the agent to know all arguments, nor do we require
that the set of arguments be conflict-free. A conflict-free
set is simply one in which no two arguments
defeat each other.
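A minimal sketch of Definition 8 as a Python structure; the field names are our own, not part of the formal system.

from dataclasses import dataclass, field

@dataclass
class Agent:
    # Definition 8: an identifier, the tagged arguments known to the
    # agent (at least its own), and its private, revisable credibility
    # function over agents in the system.
    identifier: str
    arguments: list = field(default_factory=list)    # tagged (source, argument) pairs
    credibility: dict = field(default_factory=dict)  # source -> ordered value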
A unique feature of this system is that there exists
no global consensus on the credibility value of
each agent. This measure is recorded from an individual
agent's perspective and is stored by individual agents.
Given that there is no requirement for global consensus
on an individual agent's credibility, consensus on the
amount of adjustment to the credibility value is not
required either. However, for agents to be productive,
we believe that there should be a consensus on
when, and what kind of, adjustment should be performed.
A simple rule would be that if an observation
is made of an agent winning an argument, then that
agent's credibility should be adjusted upwards, and the
converse holds. One could also extend this to capture
situation importance. For example, if an agent
is observed to have won an important argument, then
its credibility is revised upwards by a greater value
than for a less important argument. Similar
to human debates, this provides a notion of a “career
making win”. This also provides an incentive for agents
to win the argument. We have provided directions in
which one could formulate a true-to-life function; we
leave the function details to designers.
For convenience we will write Dρ to denote the defeat
relation (from Definition 7) as determined by the
A and C held by agent ρ.
Definition 9 (Stable) For an agent ρ, we say a set
S ⊆ Aρ is a stable set of arguments for that agent iff
it is a maximal (w.r.t. set inclusion) set of arguments
such that:
• ∀ψ ∈ Aρ \ S, ∃φ ∈ S that conflicts with ψ.
• ∀φ ∈ Aρ, ∄ψ ∈ S where ψ Dρ φ.
A stable set provides a particular view which the
agent has adopted. To select a particular view is to
side with some set of agents in the system, hence providing
support to that group of agents and, in essence,
forming parties or coalitions. We will write Sρ to denote
a stable set of arguments adopted by an agent ρ.
Definition 10 (Source Sensitive Argumentation System)
A source sensitive argumentation system is defined as
SAS = ⟨Agt, 𝒜⟩
where
• Agt is a set of agents.
• 𝒜 is a set of tagged arguments.
Definition 11 (Consensus) Given a source sensitive
argumentation system SAS = ⟨Agt, 𝒜⟩, we say a set
S ⊆ 𝒜 is a consensus iff for all agents ρ ∈ Agt, there
exists a stable set Sρ such that no argument in Sρ Dρ an
argument in S.
A consensus is a set of arguments that cannot be
defeated by any agent.
Definition 12 (G-consensus) Given a source sensitive
argumentation system SAS = ⟨Agt, 𝒜⟩ and
G ⊆ Agt, we say a set S ⊆ 𝒜 is a G-consensus iff
for all agents ρ ∈ G, there exists a stable set Sρ such that
no argument in Sρ Dρ an argument in S.
We understand that it is generally very hard to reach
a consensus. G-consensus can be viewed as group
consensus, where a predefined set of agents is consulted
when determining consensus. If G is the complete
set of agents in the system, then G-consensus
is equal to consensus.
Definition 13 (N-acceptable) Given a source sensitive
argumentation system SAS = ⟨Agt, 𝒜⟩, we say
a set S_N ⊆ 𝒜 is n-acceptable iff there exists some
G ⊆ Agt such that |G| = n and S_N is a G-consensus.
With n-acceptable, we simply specify the number of
agents from which consensus is sought, without specifying
a particular group.
Definition 14 (Majority) Given a source sensitive
argumentation system SAS = ⟨Agt, 𝒜⟩, we say a
non-empty set S_M ⊆ 𝒜 is a majority if it is
(|Agt|/2)-acceptable.
A majority is simply a set of arguments that more
than half the agent population is happy with.
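To illustrate Definitions 12–14, the following is a brute-force Python sketch. The oracle defeated_by(agent, candidate) is an assumption: it returns True iff the agent has no stable set that leaves the candidate set undefeated. Since the prose reads majority as "more than half", the sketch uses |Agt|//2 + 1; all names are illustrative.

from itertools import combinations

def is_group_consensus(candidate, group, defeated_by):
    # Definition 12: no agent in the group defeats any argument in the
    # candidate set (via some stable set of its own).
    return all(not defeated_by(agent, candidate) for agent in group)

def is_n_acceptable(candidate, agents, n, defeated_by):
    # Definition 13: some group of exactly n agents accepts the candidate.
    return any(is_group_consensus(candidate, group, defeated_by)
               for group in combinations(agents, n))

def is_majority(candidate, agents, defeated_by):
    # Definition 14, read as 'more than half the agent population'.
    return bool(candidate) and is_n_acceptable(
        candidate, agents, len(agents) // 2 + 1, defeated_by)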
3 PROCEDURE
Informally, an agent’s belief consists of a set of ar-
guments, and a credibility ordering derived from the
credibility function. Each agent will revise their
credibility function as the argumentation process pro-
gresses, and so we assume that each agent has a cred-
ibility revision operator. We leave the details of the
revision operator to the individual agent’s designer,
though we note that it is possible for an agent to as-
sign other agents higher credibility than itself.
For the purpose of our discussion, we propose the
use of the following basic revision rules:
• If an agent is observed to have lost an argument, its
credibility is reduced.
• If an agent is observed to have won an argument,
its credibility is increased.
• We revise only when a new observation is made,
and so do not repeatedly reduce/increase the credibility
of an agent.
We note that the above general rules still permit
agents to participate in more than one argumentation
session. A minimal sketch of such a revision operator
is given below.
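This sketch assumes numeric credibility and a fixed step; the names and the observation-id mechanism are illustrative only, with the details left, as above, to the agent designer.

def revise_credibility(credibility, winner, loser, observation_id, seen,
                       step=0.1):
    # Apply the basic rules to a single observed argument outcome:
    # raise the winner, lower the loser, and revise only once per new
    # observation (repeated reports of the same outcome are ignored).
    if observation_id in seen:
        return credibility            # not a new observation; no change
    seen.add(observation_id)
    revised = dict(credibility)
    revised[winner] = revised.get(winner, 0.0) + step
    revised[loser] = revised.get(loser, 0.0) - step
    return revised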
We will also assume that there exists a communication
protocol used by agents to transmit and receive
arguments. For simplicity, we will assume that the arguments
transmitted are transparent to all agents. As
a simplistic protocol for the argument exchange, we
suggest that agents transmit their arguments in half-duplex
mode and synchronously, hence taking turns
in presenting their argument or counter-argument. At
the end of each round of exchange, individual agents
will determine the defeat relationship between arguments.
The agents will then present their outcomes
so that the consensus and majority can be computed.
Following this simple procedure, we will now
provide an example of a multi-agent dialogue that is in
line with our argumentation system.
We introduce three agents to frame our example:
Tom, Dick and Harry. Tom (a police officer) holds
the argument α, which is 'Bill has robbed someone,
so he should be jailed'. Dick holds the argument
β, which is 'Bill is a juvenile, therefore he should not
go to jail'. Harry holds no argument yet, but will participate
in the argumentation process. Therefore initially:
Beliefs At Start
  Tom:   C(Tom) > C(Dick) > C(Harry); {α} is the outcome
  Dick:  C(Dick) > C(Tom) > C(Harry); {β} is the outcome
  Harry: C(Harry) > C(Tom) > C(Dick); {} is the outcome
At this initial point there is no consensus between
the agents, and so S = ∅. Similarly, there is no majority,
and so S_M = ∅. Note that Harry sees Tom (the
police officer) as more credible than Dick.
We assume now that Tom and Dick communicate
their arguments to the other agents, and so all agents
know the arguments of Tom and Dick. This
gives the following situation:
Beliefs After Communicating Arguments
  Tom:   C(Tom) > C(Dick) > C(Harry); Tom defeats Dick; {α} is the outcome
  Dick:  C(Dick) > C(Harry) > C(Tom); Dick defeats Tom; {β} is the outcome
  Harry: C(Harry) > C(Tom) > C(Dick); Tom defeats Dick; {α} is the outcome
There remains no consensus between the agents,
and so S = ∅. However, there is now a majority,
and so S_M = {⟨Tom, α⟩}.
We now consider what happens if the reputation
of Tom is tarnished. Formally, this would mean
that Harry changes his assessment of the credibility of
each agent so that C(Harry) > C(Dick) > C(Tom).
Note that this could happen through some other
argumentation process involving Dick and Harry.
Beliefs After Decreasing Tom's Credibility
  Tom:   C(Tom) > C(Dick) > C(Harry); Tom defeats Dick; {α} is the outcome
  Dick:  C(Dick) > C(Harry) > C(Tom); Dick defeats Tom; {β} is the outcome
  Harry: C(Harry) > C(Dick) > C(Tom); Dick defeats Tom; {β} is the outcome
Finally, we consider the situation where Tom himself
is 'ashamed' and lowers his own credibility below
that of Dick. As shown below, a consensus is
established, with all agents agreeing on β. From this
example we can see that a consensus will only be
established if there is no disagreement, or if one agent
'gives ground' and lowers their own credibility. We
will explore this more completely below.
Beliefs After Tom 'Gives Ground'
  Tom:   C(Dick) > C(Tom) > C(Harry); Dick defeats Tom; {β} is the outcome
  Dick:  C(Dick) > C(Harry) > C(Tom); Dick defeats Tom; {β} is the outcome
  Harry: C(Harry) > C(Dick) > C(Tom); Dick defeats Tom; {β} is the outcome
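The whole exchange can be replayed with a small Python script. Here we assume that α and β attack each other, so that each observer's defeat verdict reduces, by Definition 7, to comparing the credibility it assigns to Tom and Dick; the orderings mirror the tables above, and all names are illustrative.

def verdict(credibility):
    # The outcome an observer adopts under its credibility ordering.
    return {'alpha'} if credibility['Tom'] > credibility['Dick'] else {'beta'}

views = {
    'Tom':   {'Tom': 3, 'Dick': 2, 'Harry': 1},
    'Dick':  {'Dick': 3, 'Harry': 2, 'Tom': 1},
    'Harry': {'Harry': 3, 'Tom': 2, 'Dick': 1},   # Tom still outranks Dick
}
print({agent: verdict(c) for agent, c in views.items()})
# Tom and Harry adopt {alpha}, Dick adopts {beta}: a majority, no consensus.

views['Harry'] = {'Harry': 3, 'Dick': 2, 'Tom': 1}   # Tom's reputation tarnished
views['Tom']   = {'Dick': 3, 'Tom': 2, 'Harry': 1}   # Tom 'gives ground'
print({agent: verdict(c) for agent, c in views.items()})
# All three now adopt {beta}: a consensus on beta is established.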
3.1 Cycles
There exist two forms of cycles. The first is a cycle
in the defeat relation between arguments. This is caused
by agents ranking themselves as the definitive source of
credibility. Below is an example of such a cycle.
Cycle in Defeat Relation
  Tom:  Tom defeats Dick (C(Tom) > C(Dick)); {α} is the outcome
  Dick: Dick defeats Tom (C(Dick) > C(Tom)); {β} is the outcome
This form of cycle results in no consensus and, as
in this case, may result in no majority. It is not a serious
issue, as it indicates to the operators that no agreement
can be reached, but it does not hinder the termination
of the program. Note also that we do expect such
cycles to occur, as they result from underlying cycles
in the attacks relation.
The second form of cycle occurs during the computation
of consensus and majority. This is caused by
an agent's inability to commit to a particular credibility
ordering, and is directly attributable to a bad revision
operator. We will term this type of cycle oscillation.
For the purpose of this discussion, we will assume
that the revision operators defined by designers are
well behaved. With a well-behaved revision operator,
cycles of random raising and lowering of credibility
never occur. Furthermore, after winning an argument
an agent is more likely to win future arguments. This
self-supporting nature further ensures that cycles of
raising and lowering credibility never occur. This ensures
that a fixpoint is achieved, and hence a termination
condition for the program exists.
4 RESULTS AND BEHAVIOURAL
IMPACTS
This section outlines the contributions made by
this system. Agent behaviours resulting from this system
fall into the categories of agent reasoning, agent
autonomy and group behaviours.
4.1 Agent Reasoning
In (Jennings et al., 1998; Sierra et al., 1997; Parsons
et al., 1998), the authors proposed the use of argumentation
to assist in agent negotiation. Information
that is naturally embedded in arguments is used to
determine or eliminate infeasible proposals. With the
additional notion of credibility, our system further assists
in this process. With an embedded defeat relation,
agents are in a position to evaluate their own arguments
before communicating them to other agents. This
creates the notion of a minimal winning argument. A
minimal winning argument increases its strength by
minimising the possible attacks on it. For example,
reducing the number of assumptions in an argument
minimises the possible assumption attacks. The notion
of a minimal argument allows arguments to
be ranked according to strength and provides an order
in which arguments are to be presented to fellow
agents (see the sketch below). The introduction of credibility also introduces
a notion of self-preservation. If an agent wishes to be
in a position to win future arguments, then it must
keep its credibility value high. The self-preservation
notion forces the agent to evaluate arguments more
carefully so as not to randomly propose weak arguments.
Essentially, agents will be forced into a position
in which they only “fight battles that they believe
they can win”. This improves the efficiency of the
argumentation process.
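For example, an agent might order its candidate arguments by the size of their assumption sets, presenting those with the smallest assumption-attack surface first; this is a hypothetical ordering we sketch here, not one prescribed by the system.

from collections import namedtuple

WellFounded = namedtuple('WellFounded', ['facts', 'assumptions', 'conclusions'])

def presentation_order(candidates):
    # Order arguments so those exposing the fewest assumptions come first.
    return sorted(candidates, key=lambda arg: len(arg.assumptions))

# e.g. an assumption-free argument is presented before one with two assumptions:
# presentation_order([WellFounded({'f1'}, {'a1', 'a2'}, {'c'}),
#                     WellFounded({'f2'}, set(), {'c'})])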
Additionally, agents are now equipped with additional
rationale: the use of credibility provides another
means to support their decisions.
4.2 Agent Autonomy
The lack of a global defeat relation and of consensus on
credibility gives this system the ability to distribute
and scale. This feature does away with the need
for a central repository/location where conflicts are
resolved. The system also allows arguments to
propagate, hence conflicts can be isolated and resolved
independently. This approach also removes the notion
of a static top argument.
4.3 Agent Group Behaviour
Group decision making and coalition formation have
received a huge amount of interest recently. Our system
provides agents with the ability to support fellow
agents. This is achieved through the consensus and
majority operations. If an agent's argument is deemed
strong, it will persuade fellow agents of its view.
This exercise causes parties, and hence coalitions, to
form, leading to dynamic coalition formation.
5 RELATED WORKS
This section outlines the related works, which
fall into three categories: defeasible
argumentation, argumentation-based negotiation and
distributed constraint satisfaction.
5.1 Defeasible Argumentation
Although works such as (Dung, 1995; Prakken,
1993; Bondarenko et al., 1997; Dung and Son, 2001;
Vreeswijk, 1997) focus on representation and logical
reasoning, we have shown that by introducing the notion
of sources and credibility, as well as a corresponding notion
of defeat, we are able to provide a mapping from a
theoretical to a practical system. These systems provide
a strong basis for our proposal. We have modelled
our system around this collection of defeasible
argumentation systems, and we feel that our work will
complement those advances already achieved.
Additionally, recent work (Prakken, 2001; Carbogim,
2000; Brewka, 2001) in defeasible argumentation
has focused on the notion of dynamic argumentation.
(Prakken, 2001) focuses on the fairness
and soundness of dynamic argumentation protocols.
(Carbogim, 2000) focuses on addressing issues
associated with changes in the underlying knowledge
base caused by arguments. (Brewka, 2001) deals
with meta-level argumentation, providing undercut
via a notion of preference over which defeat rule holds.
Components of these works have similarities to ours;
however, the notion of sources and credibility forcing
a change of preference over the defeat rules is not
investigated. In fact, these works have no explicit
notion of sources and credibility.
5.2 Argumentation-Based
Negotiation
The exchange of arguments and counterarguments
has also been studied in the context of multi-agent
interaction. (Kraus et al., 1998) and (Parsons et al.,
1998) have studied argumentation as a component of
negotiation protocols, where arguments for an offer
should persuade the other party to accept the offer.
Generally, these works focus on the negotiation protocol
and agent behaviours, leaving the representation
and internal reasoning to the designer. In (Kraus et al.,
1998), a range of agents and behaviours were prescribed.
We feel that our work further enhances the
internal reasoning of these agents. Although some
notion of sources and credibility may have existed in
(Jennings et al., 1998; Sierra et al., 1997), these notions
were not explicitly utilised when evaluating arguments.
5.3 Distributed Constraint
Satisfaction
Recent work on distributed constraint satisfaction
algorithms (Harvey et al., 2005b; Harvey et al., 2005a;
Harvey et al., 2006) is built upon the theoretical
underpinnings described in this paper. Support-Based
Distributed Search (SBDS) is a distributed constraint
satisfaction algorithm in which agents communicate
via arguments, maintaining a simple notion of credibility
between agents.
The argument structure of SBDS is domain specific,
permitting two categories of arguments, and it deviates
from the argument representation in this paper
(which distinguishes between facts and assumptions).
Below is a loose description of the argument
structure used in SBDS.
Definition 15 (SBDS Argument) An SBDS-argument
is a pair ⟨Prem, Con⟩, and belongs to one
of the following two categories:
1. isgoods (variable-value assignment proposals)
• Prem - an ordered sequence of variable-value assignments
• Con - the variable-value assignment for the agent
stating the argument
2. nogoods (variable-value assignment rejections)
• Prem - a set of variable-value assignments which
are not permitted
• Con - an exact copy of the premise
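A loose Python sketch of these two categories follows; it is our own illustration, not the actual SBDS implementation, and the type names are assumptions.

from dataclasses import dataclass
from typing import FrozenSet, Tuple

Assignment = Tuple[str, int]   # a (variable, value) pair

@dataclass(frozen=True)
class Isgood:
    # A proposal: an ordered premise of assignments supporting the
    # stating agent's own variable-value assignment.
    prem: Tuple[Assignment, ...]
    con: Assignment

@dataclass(frozen=True)
class Nogood:
    # A rejection: a set of assignments which are not permitted; the
    # conclusion is, per Definition 15, an exact copy of the premise.
    prem: FrozenSet[Assignment]

    @property
    def con(self) -> FrozenSet[Assignment]:
        return self.prem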
As the argument structure of SBDS differs from
the monotonic logic example given in this paper, a
domain-specific conflict and attack relation was defined.
The spirit of these relations remains the same
as those provided in our work. This further strengthens
our claim that our proposal complements, and provides
assistance to, solving problems in a whole range
of different domains.
6 CONCLUSION
In most argumentation systems, the source of the argument
plays a minimal role. In this paper, we have
introduced the notion of sources and credibility. We
have shown that by simply tagging arguments with
sources, the resulting defeat relation is more fluid
and the dynamics of argumentation systems change.
We have shown that ignoring this important attribute
of human argumentation reduces the capabilities of
current argumentation systems. This paper focuses
on the importance of the information source in argumentation,
extending this to the notion of credibility.
The notion of credibility plays an important role in
the agent decision-making process during argumentation.
We have also shown that the system is capable of
emulating features of human argumentation that are
not captured in existing systems. Finally, we have
shown that this system is not purely theoretical but
can also be applied in practical domains such as
argumentation-based negotiation and distributed constraint
satisfaction.
6.1 Future Works
Throughout this paper, we have indicated room for
improvements. We point to improvements that can be
made in mapping sources to credibility by augmenting
the function to consider context. We also note that
the current mapping may not be satisfactory
to some audiences. A suggestion of generalising
the mapping function via the use of a semi-ring structure,
combining two orthogonal metrics (one measuring
the strength of the argument, the other measuring the
credibility), is currently under investigation. By using
this approach, we would be in a position to provide
a notion of graded defeat. This modification would
provide a method to infer the global credibility and
defeat relation from a given set of agents.
REFERENCES
Bondarenko, A., Dung, P. M., Kowalski, R. A., and Toni, F. (1997). An abstract, argumentation-theoretic approach to default reasoning. Artificial Intelligence, 93:63–101.

Brewka, G. (2001). Dynamic argument systems: A formal model of argumentation processes based on situation calculus. Journal of Logic and Computation, 11.

Carbogim, D. V. (2000). Dynamics in Formal Argumentation. PhD thesis, University of Edinburgh, College of Science and Engineering, School of Informatics.

Dung, P. M. (1995). An argumentation-theoretic foundation for logic programming. Journal of Logic Programming, 22(2):151–171.

Dung, P. M. and Son, T. C. (2001). An argument-based approach to reasoning with specificity. Artificial Intelligence, 133(1-2):35–85.

Harvey, P., Chang, C. F., and Ghose, A. (2005a). Practical application of support based distributed search. In Proceedings of the 17th IEEE International Conference on Tools with AI.

Harvey, P., Chang, C. F., and Ghose, A. (2005b). Support-based distributed search. In Meisels, A., editor, Proceedings of the 6th International Workshop on Distributed Constraint Reasoning. http://www.dsl.uow.edu.au/people/harvey/dcr05.pdf.

Harvey, P., Chang, C. F., and Ghose, A. (2006). Support-based distributed search. In Proceedings of the 5th International Joint Conference on Autonomous Agents & Multi Agent Systems.

Jennings, N. R., Parsons, S., Noriega, P., and Sierra, C. (1998). On argumentation-based negotiation. In Proceedings of the International Workshop on Multi-Agent Systems, Boston, USA.

Kraus, S., Sycara, K. P., and Evenchik, A. (1998). Reaching agreements through argumentation: A logical model and implementation. Artificial Intelligence, 104(1-2):1–69.

Parsons, S., Sierra, C., and Jennings, N. R. (1998). Agents that reason and negotiate by arguing. Journal of Logic and Computation, 8(3):261–292.

Pollock, J. L. (1970). The structure of epistemic justification. American Philosophical Quarterly, monograph series 4:62–78.

Pollock, J. L. (1991). Self-defeating arguments. Minds and Machines, 1(4):367–392.

Prakken, H. (1993). An argumentation framework in default logic. Annals of Mathematics and Artificial Intelligence, 9(1-2):93–132.

Prakken, H. (2001). Relating protocols for dynamic dispute with logics for defeasible argumentation. Synthese, 127(1):187–219.

Prakken, H. and Vreeswijk, G. (2002). Logics for Defeasible Argumentation, volume 4 of Handbook of Philosophical Logic, pages 218–319. Kluwer Academic Publishers, second edition.

Sierra, C., Jennings, N. R., Noriega, P., and Parsons, S. (1997). A framework for argumentation-based negotiation. In Singh, M. P., Rao, A. S., and Wooldridge, M., editors, Proceedings of the 4th International Workshop on Agent Theories, Architectures, and Languages, pages 177–192.

Verheij, B. H. (1994). Reason based logic and legal knowledge representation. In Carr, I. and Narayanan, A., editors, Proceedings of the 4th National Conference on Law, Computers and Artificial Intelligence, pages 154–165. University of Exeter.

Vreeswijk, G. (1997). Abstract argumentation systems. Artificial Intelligence, 90(1-2):225–279.