TOWARDS A FORMAL MODEL OF KNOWLEDGE
ACQUISITION VIA COOPERATIVE DIALOGUE
Asma Moubaiddin
Department of Linguistics and Phonetics, Faculty of Arts, The University of Jordan, Amman, Jordan
Nadim Obeid
Department of Computer Information Systems, King Abdullah II School for Information Technology
The University of Jordan, Amman, Jordan
Keywords: Dialogue, Argumentation, Nonmonotonic Logic, Knowledge Acquisition/Learning, Formalization.
Abstract: We aim, in this paper, to make a first step towards developing a model of knowledge acquisition/learning
via cooperative dialogue. A key idea in the model is the concept of integrating exchanged information, via
dialogue, within an agent's theory. The process is nonmonotonic. Dialogue is a structured process and the
structure is relative to what an agent knows about the world or a domain of discourse. We employ a
nonmonotonic logic system, NML3, which formalizes some aspects of revisable reasoning, to capture an
agent's knowledge and reasoning. We will present a formalization of some basic dialogue moves and the
protocols of various types of dialogue. We will show how arguments, proofs, some dialogue moves and
reasoning may be carried out within NML3.
1 INTRODUCTION
We aim, in this paper, to make a first step towards
developing a model of Knowledge Acquisition
(KA)/learning via cooperative dialogue. A key idea
in the model is the concept of integration; an agent
learns a collection of propositions concerning some
situation by integrating it within its knowledge about
that situation. Agents may switch roles.
We assume that each of the participants in a
dialogue has a certain well-defined role, determined
by the type of dialogue, the goal of that type of
dialogue, and the rules for making moves in it. We
shall, following (van Emeren and Grootendorst,
1992; Walton, 1992; Walton and Krabbe, 1995),
adopt a model of dialogue that is based on
commitment. Agents are computational entities that
have knowledge and possess the ability to acquire
and manipulate (modify, derive) their knowledge
through reasoning.
We shall assume that the agents are cooperative,
abide by rationality rules, e.g. rules of relevance
(cf. Grice, 1975), and are rational in the sense that
they fulfil their commitments and obligations in a
way that truthfully reflects their beliefs and intentions.
The types of dialogue we will be considering in
this paper are: information-seeking, inquiry and
persuasion. A dialogue is initiated through
questioning. An answer to a question, about a
particular situation, may confirm what the agent
accepts/knows or it may somehow require a process
of belief revision. This suggests that the process of
incorporating new information into an agent's
theory be modelled nonmonotonically. To capture an
agent's knowledge and reasoning we employ a
three-valued nonmonotonic logic, NML3,
which formalizes some aspects of revisable
reasoning and is amenable to implementation.
Within NML3, we present a formalization of some
basic dialogue moves and the rules of protocols of
some types of dialogue. The rules of a protocol are
nonmonotonic in the sense that the set of
propositions to which an agent is committed and the
validity of moves vary from one move to another.
We will show how proofs, some dialogue moves
and reasoning may be carried out within NML3.
We shall begin, in section 2, with a presentation
of NML3 employed to capture an agent's
knowledge and reasoning. In section 3 we present
the types of dialogue and in section 4 we present a
formalization of some dialogue moves, rules of
protocols of some types of dialogue and the process
of integration. We show in section 5 how arguments
and proofs are handled in NML3. Section 6 gives
examples of dialogue and reasoning within NML3,
section 7 discusses related work and section 8 offers
concluding remarks.
2 REASONING WITH
INCOMPLETE INFORMATION
The agent’s partial knowledge and reasoning
capability are expressed in a non-monotonic Logic,
NML3. The language L_NML3 is that of Kleene's
three-valued logic extended with the modal operator
M (Epistemic Possibility). Starting with T (true), F
(false) and a set of atoms p, q, r, ..., more
complicated Well-Formed Formulae (WFF) are formed via
closure under ~ (negation), & (conjunction), V
(disjunction) and → (implication). That is, if A and
B are WFF, then so are ~A, A&B, AVB, A→B and
MA. In NML3, L is the dual of M: LA ≡ ~M~A.
(Obeid, 1996) defines a truth-functional implication ⊃
that behaves exactly like the material implication
of classical logic, as follows:
   A ⊃ B = M(~A&B) V ~A V B.
Non-monotonic reasoning is represented via the
epistemic possibility operator M. Using M, we may
define the operators U (undefined), D (defined) and
¬ (classical negation) as follows:
   UA ≡ MA & M~A
   DA ≡ ~UA
   ¬A ≡ DA & ~A
Formal Semantics
Definition 2.1 A model structure for L_NML3 is
M = <W, R, g> where W is a non-empty set of
information states, R is a binary relation on W and
g is a truth assignment function for atomic WFF. R
can be interpreted as the epistemic possible-extension
relation between states. Given that w and w1 are
members of W, we shall write w R w1 to mean that
the information state w1 is an epistemic possible
extension of the information state w.
We employ the notation M,w |=_g A (resp.
M,w =|_g A) to mean that A is accepted as true (resp.
false) at w in M with respect to g, and M |=_g A
(resp. M =|_g A) to mean that A is accepted as true
(resp. false) at every w in M with respect to g. For
convenience, reference to g will be omitted except
when confusion may arise.
Definition 2.2 Let A, B be WFF. Then the truth
(|=) and falsity (=|) notions are recursively defined
as follows:
(i) M,w |= T
(ii) M,w |= p iff g(w,p) = true, for atomic p
(iii) M,w |= A&B iff M,w |= A and M,w |= B
(iv) M,w |= ~A iff M,w =| A
(v) M,w |= MA iff (∃w1 ∈ W)(wRw1 and M,w1 |≠ ~A)
(i') M,w =| F
(ii') M,w =| p iff g(w,p) = false, for atomic p
(iii') M,w =| A&B iff M,w =| A or M,w =| B
(iv') M,w =| ~A iff M,w |= A
(v') M,w =| MA iff (∀w1 ∈ W)(if wRw1 then M,w1 |= ~A)
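For illustration, Definition 2.2 can be read operationally as a model checker over a finite model structure <W, R, g>. The following sketch (ours, in Python; the encoding of formulae and the function names are illustrative assumptions, not part of NML3) mirrors the truth and falsity clauses:

# Minimal sketch (not part of NML3) of the truth (|=) and falsity (=|) relations
# of Definition 2.2 over a finite model structure <W, R, g>.
# Formulae are tuples: ("T",), ("F",), ("atom", "p"), ("not", A), ("and", A, B), ("M", A).

def holds(W, R, g, w, A):
    """M,w |= A : A is accepted as true at state w."""
    kind = A[0]
    if kind == "T":
        return True
    if kind == "F":
        return False
    if kind == "atom":
        return g.get((w, A[1])) is True            # g(w, p) = true
    if kind == "not":
        return fails(W, R, g, w, A[1])             # ~A is true iff A is false
    if kind == "and":
        return holds(W, R, g, w, A[1]) and holds(W, R, g, w, A[2])
    if kind == "M":                                # clause (v)
        return any((w, w1) in R and not holds(W, R, g, w1, ("not", A[1])) for w1 in W)
    raise ValueError(kind)

def fails(W, R, g, w, A):
    """M,w =| A : A is accepted as false at state w."""
    kind = A[0]
    if kind == "T":
        return False
    if kind == "F":
        return True
    if kind == "atom":
        return g.get((w, A[1])) is False           # g(w, p) = false
    if kind == "not":
        return holds(W, R, g, w, A[1])             # ~A is false iff A is true
    if kind == "and":
        return fails(W, R, g, w, A[1]) or fails(W, R, g, w, A[2])
    if kind == "M":                                # clause (v')
        return all(holds(W, R, g, w1, ("not", A[1])) for w1 in W if (w, w1) in R)
    raise ValueError(kind)

# Example: p is true at w0 and undefined at w1; Mp is accepted as true at w0.
W = {"w0", "w1"}
R = {("w0", "w0"), ("w0", "w1")}
g = {("w0", "p"): True}
print(holds(W, R, g, "w0", ("M", ("atom", "p"))))   # True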
An Axiomatic System
NML3 is the smallest set of sentences of L_NML3
which is closed under the following axiom schema
and inference rules. We shall write |-_NML3 A to mean
that A is a theorem of NML3.
Axiom Schema
(a1) A → (B → A&B)
(a2) A → (B → A)
(a3) A&B → A    (a3') A&B → B
(a4) (A → B) → [(B → C) → (A → C)]
(a5) ~~A ↔ A (i.e., ~~A → A and A → ~~A)
(a6) ~(A&B) → (~A V ~B)
(a7) A → MA
Inference Rules
Modus Ponens (MP) for → together with:
(R1) From ~A V B infer ~MA V B
(R2) From A → B infer MA → MB
(R3) From the inability to infer ~A, infer MA
NML3 is sound and complete. One of the
advantages of NML3 is that the defaults of Reiter's
default logic (Reiter, 1980) can be represented as
sentences in the object language of the system. It can
be shown that there is a one-to-one correspondence
between the extensions of a default theory and
appropriate minimal information states which
provide the semantic account (models) of the system
NML3. For more details, see (Obeid, 2005).
3 DIALOGUE
Dialogue is an exchange of messages between
two (or more) participants. Every dialogue has a goal
and requires cooperation between the participants to
fulfil it. This means that each participant has a
commitment to work towards fulfilling its own
goal and a commitment to cooperate with the other
participant's attempt to realize their own goals.
(Walton and Krabbe, 1995) provides a typology
of dialogue types between two agents. For each type
of dialogue, they formulate an initial situation, a
primary goal, and a set of rules. These constitute a
model, representing the ideal way reasonable,
cooperative agents participate in the type of dialogue
in question. It is important to note that in the course
of communication, there often occurs a shift from
one type of dialogue to another. Dialogue
embedding takes place when the embedded dialogue
is functionally related to the first one. For instance, a
persuasion dialogue may require an information-
seeking sub-dialogue.
Information seeking (IS): In an IS dialogue, one
participant seeks information that another participant
may be able to provide. An IS dialogue is initiated
when a participant lacks some information. There
may not be a need for proof in an IS dialogue and it
is not necessary to establish a collective belief.
Inquiry:
The basic goal of inquiry is information
growth, so that an agreement can be reached about
a conclusive answer to some question. The goal is
reached by an incremental process of argumentation
that employs established facts in order to prove
conclusions beyond a reasonable doubt. In short, the
aim is to acquire more reliable knowledge to the
satisfaction of all involved. Inquiry is a cooperative
type of dialogue and correct logic proofs are
essential. It is the most relevant type of dialogue in
collective decision making and learning. Inquiry
may require persuasion and vice-versa.
Persuasion:
The goal of persuasion dialogue is for
one agent to persuade the other participant(s) of its
point of view and the method employed is to prove
the adopted thesis. The initial reason for starting a
persuasion dialogue is a conflict of opinion between
two or more agents and the collective goal is to
resolve the issue. Argument here is based on the
concessions of the other participant. Proofs can be of
two kinds: (1) inferring a proposition from the other
participant's concessions; and (2) introducing
new premises, possibly supported by evidence.
Clearly, a process of knowledge update/belief
revision takes place here.
4 DIALOGUE SYSTEM
A dialogue system is a formal model that aims to
represent how a formal dialogue should proceed. It
defines the rules of the dialogue.
The topic language, L_Topic, is a logical language
which consists of the propositions that are the topics
of the dialogue. L_Topic is associated with a logic Σ (in
our case NML3) which determines the inference
rules and the defeat relations between arguments, and
defines the construction of proper arguments and
dialogue moves. The choice of Σ (whether
monotonic or nonmonotonic) has an impact on the
entire dialogue system.
The communication language, L_COM, specifies
the locutions which the participants are able to make
in the dialogue. One of the most influential agent
communication languages is KQML (Finin et al.,
1994). Our proposed system uses a KQML-type
language. We will assume that every agent
understands this language and that all agents have
access to a common argument ontology, so that the
semantics of a message is the same for all agents.
4.1 Some Basic Dialogue Moves
A dialogue, D, is a sequence M_1, . . . , M_n. A move
is a quadruple as follows:
   M_i = <ID(M_i), PL(M_i), LOC(M_i), TARGET(M_i)>
where
(1) ID(M_i), the identifier of M_i, is i (i.e., indicating
that M_i is the i-th element of the sequence in the
dialogue).
(2) PL(M_i) is the player of the move.
(3) LOC(M_i) is the locution of the move from L_Topic.
(4) TARGET(M_i) is the target of the move. If M_i is
a reply to a message in M_j, where j < i, then
TARGET(M_i) = M_j.
Every dialogue system specifies its own set of
locutions. There are, however, several basic types of
communication primitives. Among these are:
Assert A: an agent g states A.
Retract A: this move is a countermove to Assert A.
In NML3, Retract A by g does not commit g to
Assert ¬A.
Accept A: an agent g accepts/concedes a
proposition A given by another agent.
Reject A: a countermove to Accept A.
Reject A by g does not commit g to Accept ¬A.
Question A: an agent g asks another agent, g1, for
information about A (e.g., whether A is derivable
from its theory, i.e., whether Σ(g1) |- A).
Challenge A: this move is made by one agent g to
ask another, g1, to explicitly state a proof (an
argument supporting) of A.
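For illustration only, the move quadruple and the basic locutions above can be encoded as a simple data structure; the sketch below (Python, with names of our own choosing) is not part of the formal model:

# Illustrative encoding of a dialogue move M_i = <ID(M_i), PL(M_i), LOC(M_i), TARGET(M_i)>.
from dataclasses import dataclass
from typing import Optional

LOCUTIONS = {"Assert", "Retract", "Accept", "Reject", "Question", "Challenge"}

@dataclass(frozen=True)
class Move:
    ident: int                      # ID(M_i): position i in the dialogue sequence
    player: str                     # PL(M_i): the agent making the move
    locution: str                   # the locution type, one of LOCUTIONS
    content: str                    # the proposition A the locution is about
    target: Optional[int] = None    # identifier j of the earlier move being replied to (j < i)

    def __post_init__(self):
        assert self.locution in LOCUTIONS
        assert self.target is None or self.target < self.ident

# A dialogue D is then just the sequence M_1, ..., M_n:
dialogue = [
    Move(1, "g",  "Question", "A"),
    Move(2, "g1", "Assert",   "A", target=1),
    Move(3, "g",  "Accept",   "A", target=2),
]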
4.2 Knowledge Update/Process of
Integration
The way a dialogue affects an agent's knowledge or
Knowledge Base (KB) depends on how the agent
reacts to exchanged information.
Accepting a proposition A by an agent g1 entails
that A is not inconsistent with its KB, KB(g1). We
may distinguish the following cases:
(I) There is only one extension of KB(g1) and either
(a) A is derivable from KB(g1) using g1's logic,
or
(b) ¬A is not derivable from KB(g1).
(II) There are many extensions of KB(g1) and A is
derivable in one whereas ¬A is derivable in another.
Accepting A would be a commitment to the
extension(s) of KB(g1) where A holds.
Rejecting by g1 a proposition A asserted by
another agent may entail:
(a) ¬A is derivable from KB(g1),
or
(b) neither A nor ¬A is derivable from KB(g1),
a case of rejection without justification.
It is important to add that the issue of rejection
and/or acceptance, by an agent, say g1, of
propositions asserted by another agent, g2, is
complicated by various factors such as (temporal)
persistence, the denial of previously accepted
assertions and contradictory support for a
proposition and its negation.
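The cases above suggest the following decision sketch for integrating an asserted proposition (illustrative only; the function and argument names are ours, and `derives` is a placeholder for provability in g1's logic, which in our setting would be the NML3 proof procedure of Section 5):

# Sketch of the integration cases of Section 4.2.  `extensions` is the set of
# extensions of KB(g1); `derives(E, X)` is a placeholder for derivability of X
# from extension E in g1's logic.

def reaction_to_assertion(extensions, A, not_A, derives):
    """Return how g1 may react to an asserted proposition A."""
    holds_in = [E for E in extensions if derives(E, A)]
    refuted_in = [E for E in extensions if derives(E, not_A)]
    if len(extensions) == 1:
        if holds_in or not refuted_in:        # case (I)(a) or (I)(b): accept
            return "accept"
        return "reject"                       # ¬A is derivable: justified rejection
    if holds_in and refuted_in:               # case (II): A in one extension, ¬A in another
        return "commit-to-extensions-where-A-holds"
    if refuted_in:
        return "reject"                       # ¬A derivable in some extension(s), A in none
    return "accept-or-reject-without-justification"   # neither A nor ¬A is derivable

# e.g. with two extensions, one deriving A and one deriving ¬A:
exts = [{"A", "B"}, {"¬A"}]
print(reaction_to_assertion(exts, "A", "¬A", lambda E, X: X in E))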
4.3 Update Rules of Dialogue Moves
Let COMIT_i(g) represent the commitment set of
agent g at a move that has identifier i. COMIT_i(g)
is a set of propositions from L_Topic which the agent g
is committed to (e.g., prepared to hold on to) at that
point in the dialogue. During the dialogue,
propositions are added to and/or deleted from the
commitment set.
Given this background, we give the update rules
that specify how commitment stores are modified by
a move (cf. Maudet and Evrard, 1998).
Let j < i, let M_j be a move played by g1, and let
M_i be a move by g as a reply to M_j. Then:
(1) M_i = <i, g, Assert A, M_j>
COMIT_i(g) = COMIT_{i-1}(g) ∪ {A} and
COMIT_i(g1) = COMIT_{i-1}(g1).
This step adds A to COMIT_{i-1}(g) to result in
COMIT_i(g), and g can offer a proof of A.
(2) M_i = <i, g, Retract A, M_j>
COMIT_i(g) = COMIT_{i-1}(g) - {A} and
COMIT_i(g1) = COMIT_{i-1}(g1).
This step deletes A from COMIT_{i-1}(g) to result in
COMIT_i(g), i.e., A is deleted from g's theory.
(3) M_i = <i, g, Accept A, M_j>
COMIT_i(g) = COMIT_{i-1}(g) ∪ {A} and
COMIT_i(g1) = COMIT_{i-1}(g1).
Agent g accepts A from g1. The impact of this step is
that A will be added to COMIT_{i-1}(g) to yield
COMIT_i(g). This is possible only if the locution of
the message M_j is Assert A. The impact of this
message will be an update of g's theory with A.
(4) M_i = <i, g, Reject A, M_j>
COMIT_i(g) = COMIT_{i-1}(g) - {A} and
COMIT_i(g1) = COMIT_{i-1}(g1).
Agent g rejects A from g1. This is possible only
if the locution of the message M_j is Assert A. The
impact of this message could either be no change to
g's theory, if it is in contradiction with A (in which
case COMIT_i(g) = COMIT_{i-1}(g) - {A} = COMIT_{i-1}(g)),
or an update of g's theory by retracting A.
(5) M_i = <i, g, Question A, M_j>
This move does not alter either COMIT_i(g) or
COMIT_i(g1). In this case g is asking g1 for
information about A (e.g., whether A is derivable
from its theory, i.e., whether Σ(g1) |- A).
(6) M_i = <i, g, Challenge A, M_j>
This move does not alter either COMIT_i(g) or
COMIT_i(g1). In this move g is forcing g1 to
explicitly state a proof (an argument supporting) of A.
It is important to note that a participant in a
dialogue must keep track of the conversational
record between them and record what has been
accepted, challenged or rejected.
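Operationally, rules (1)-(6) reduce to simple set operations on the commitment stores. The following sketch (ours, with illustrative names) is one way to realise them:

# Sketch of the commitment-store update rules (1)-(6) of Section 4.3.
# COMIT maps each agent to the set of propositions it is currently committed to.

def update_commitments(COMIT, player, locution, A):
    """Return COMIT_i obtained from COMIT_{i-1} by a move <i, player, locution A, M_j>."""
    new = {agent: set(props) for agent, props in COMIT.items()}   # copy COMIT_{i-1}
    if locution in ("Assert", "Accept"):        # rules (1) and (3): add A to the player's store
        new[player].add(A)
    elif locution in ("Retract", "Reject"):     # rules (2) and (4): delete A from the player's store
        new[player].discard(A)
    # rules (5) and (6): Question and Challenge leave both stores unchanged
    return new

# e.g. after g asserts A, only g's commitment store changes:
COMIT = {"g": set(), "g1": set()}
COMIT = update_commitments(COMIT, "g", "Assert", "A")
print(COMIT)   # {'g': {'A'}, 'g1': set()}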
4.4 Rules of Protocols of Different
Types of Dialogue
Information-Seeking: Let the information seeker be
g and the other agent be g1.
(1) g makes a Question move such as M_i = <i, g,
Question A, M_l>, where M_l is a move made
earlier by g1 and l < i.
(2) g1 replies with the move M_k, whose identifier
is k and whose target is M_i, where k > i, as follows:
(i) M_k = <k, g1, Assert A, M_i> or
(ii) M_k = <k, g1, Assert ¬A, M_i> or
(iii) M_k = <k, g1, Assert UA, M_i>.
UA means that for g1 the truth value of A is
undefined.
(3) g either accepts g1's response using an Accept
move or challenges it with a Challenge move.
UA either initiates an inquiry sub-dialogue between
the agents or the information-seeking dialogue is
terminated.
(4) g1 replies to a Challenge move, say M_c, with a
proof using a move M_r = <r, g1, Assert S, M_c>,
where S is a proof of A in Σ(g1).
(5) Go to step (3) for each sentence in S.
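Read operationally, the information-seeking protocol is a small loop over Question, Assert, Accept and Challenge moves. The sketch below is a deliberate simplification of ours (the `answer`, `prove` and `wants_challenge` functions are placeholders for the agents' NML3 theories and strategies, not part of the protocol itself):

# Simplified sketch of the information-seeking protocol of Section 4.4.
# `answer(A)` stands for g1's reply (A, ¬A, or "U"+A for undefined);
# `prove(s)` stands for g1's proof of a challenged sentence s in Sigma(g1);
# `wants_challenge(s)` stands for g's decision to challenge s.

def information_seeking(A, answer, prove, wants_challenge):
    response = answer(A)                       # step (2): g1 asserts A, ¬A or UA
    if response.startswith("U"):               # UA: inquiry sub-dialogue or termination
        return "undefined: open an inquiry sub-dialogue or terminate"
    pending = [response]
    while pending:                             # steps (3)-(5)
        sentence = pending.pop()
        if wants_challenge(sentence):          # g challenges the sentence ...
            pending.extend(prove(sentence))    # ... and g1 replies with its proof S
        # otherwise g accepts the sentence and the dialogue moves on
    return "terminated: g accepted g1's answer and its support"

# e.g. g1 answers A and supports it, when challenged, by the proof {B, B -> A}:
print(information_seeking("A",
                          answer=lambda q: q,
                          prove=lambda s: ["B", "B -> A"] if s == "A" else [],
                          wants_challenge=lambda s: s == "A"))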
Inquiry. The following is an inquiry protocol about
a proposition A involving g and g1.
(1) g seeks a support/proof for A. It begins with an
Assert move that asserts B → A or asserts B ⊃ A,
for some sentence B, or a move that asserts UA.
(2) g1 either accepts B → A or B ⊃ A using
an Accept move, or challenges either of B → A
and B ⊃ A, as appropriate, with a Challenge
move.
(3) g replies to a challenge with an Assert move that
provides a proof P in Σ(g) of the last proposition
challenged by g1.
(4) Go to step (2) for every proposition C ∈ P. That
is, substitute C for B → A or B ⊃ A.
(5) g1 seeks a support/proof for B, i.e., it replies
with an Assert move that asserts E → B or
asserts E ⊃ B, for some sentence E, or a move
that asserts UB.
(6) If COMIT(g) ∪ COMIT(g1) |- A, then
the dialogue terminates successfully.
(7) The agents reverse roles and the appropriate
agent seeks a support/proof for E (step 5).
Persuasion. The agent g is trying to persuade g1 to
accept A.
(1) g begins with a move that asserts A.
(2) g1 replies with a move that
(i) accepts A, or
(ii) asserts ¬A, or
(iii) challenges A.
(3) There are two possibilities:
(a) If the answer of g1 in the previous step is
(ii), then go to step (2) with the roles of
the agents reversed and ¬A in place of A.
(b) If the answer of g1 in the previous step is
(iii) (challenge), then
(α) g replies with a move that
provides/asserts a proof P of A in Σ(g);
(β) go to step (2) for every
proposition C ∈ P.
5 ARGUMENTATION AND
PROOF IN NML3
It is clear from Section 4 that arguments have an
essential role to play in situations of conflict. They
can be used by an agent to increase the degree of
compatibility between its knowledge/beliefs and
those of other agents; one agent can persuade
another to adopt one or more propositions that it
accepts by presenting proofs/support for those
propositions (cf. Reed et al., 1997). In Artificial
Intelligence (AI), argumentation is used in different
ways: (1) to structure knowledge, where the aim is to
determine how utterances form arguments and how
arguments can be decomposed (cf. Toulmin, 1958);
(2) to model dialectical reasoning and deal with
argument construction (cf. Dung, 1995). It is
important to present an argument in such a way that
it appeals to the other participant's knowledge.
Argumentation allows an agent to critically question
the validity of information presented by another
participant, explore multiple perspectives and/or
get involved in belief revision processes.
5.1 Argumentation Framework
An Argumentation Framework (AF) system should
capture and represent the constituents of arguments
(e.g., the propositions which are taken into
consideration). These may include facts, definitions,
rules, regulations, theories, assumptions and
defaults. They can be represented as formulae or sets
of formulae. It should also capture the interactions
and reactions between arguments and constituents of
arguments such as undercutting. Furthermore, some
notion of preference over arguments may be needed
in order to decide between conflicting arguments.
Definition 5.1 Let Σ be a logical system. An
argument in Σ is a pair P = <S, A> where
S is a set of Well-Formed Formulae (WFF) and
A is a WFF of the language of Σ such that
(1) S is consistent,
(2) S |-_Σ A (A follows from S in Σ),
(3) S is minimal, i.e., no proper subset of S satisfies
(1) and (2).
An argument in a logical system Σ is simply a proof
in Σ. S may need to be ordered. Thus, minimality in
condition (3) may not necessarily be set-theoretic.
S is called the support of P and A its
conclusion. We shall use Support(P) (resp. Conc(P))
to denote that S is the support of P (resp. A is the
conclusion of P).
If the logical system Σ contains defeasible
implications/rules, then it would be worthwhile
distinguishing between a defeasible argument and a
non-defeasible/classical argument.
Definition 5.2 A defeasible argument is a proof P =
<S, A> where S contains some defeasible
implications. A non-defeasible/classical argument is
a proof that does not contain any defeasible
implication(s) or rely on any un-discharged
assumptions.
It is important to note that in a
defeasible/nonmonotonic theory, an agent could
provide an argument for both a proposition and its
negation, i.e., the theory of the agent may have
multiple extensions (cf. Reiter, 1980). Hence the
need for a notion of undercutting.
Definition 5.3 Let P1 and P2 be two arguments in Σ.
Then Undercut(P1, P2) iff (∃B ∈ Support(P2)) such
that B ≡ ¬Conc(P1), where "≡" is the equivalence
of classical logic.
P1 undercuts P2 if, and only if, the support of P2
contains a formula that is the negation of the
conclusion of P1.
Propositions in agents' theories may need to be
ordered to reflect some preference between
propositions, needed to choose between conflicting
arguments. Such an order could reflect the degree of
belief or truth in a proposition or some other
measure of preference.
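Definitions 5.1 and 5.3 translate directly into a small data structure. In the following sketch (illustrative only, with names of our own choosing) consistency and minimality of the support are assumed to have been established by the underlying logic, and the classical equivalence in Definition 5.3 is approximated by syntactic identity:

# Sketch of Definitions 5.1 and 5.3.  Conditions (1)-(3) on the support
# (consistency, derivability, minimality) are assumed to have been checked
# by the underlying logic Sigma.
from dataclasses import dataclass
from typing import FrozenSet

def negate(A: str) -> str:
    return A[1:] if A.startswith("¬") else "¬" + A

@dataclass(frozen=True)
class Argument:
    support: FrozenSet[str]     # S: the (consistent, minimal) support of the argument
    conclusion: str             # A: its conclusion, with S |- A in Sigma

def undercuts(P1: "Argument", P2: "Argument") -> bool:
    """Undercut(P1, P2): some B in Support(P2) is (here: syntactically) ¬Conc(P1)."""
    return negate(P1.conclusion) in P2.support

# P2 concludes A from {B, B -> A}; P1 concludes ¬B, so P1 undercuts P2.
P1 = Argument(frozenset({"C", "C -> ¬B"}), "¬B")
P2 = Argument(frozenset({"B", "B -> A"}), "A")
print(undercuts(P1, P2))   # True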
5.2 Proof Method For NML3
One of the essential features of the proof system is
that it allows free and complete access to all stages
of the proof process.
The proof method proceeds by the construction
of a tableau (Beth, 1987). This is a tree-structure in
which all the possible models allowed by the
premises and negated conclusion are set out and
examined for consistency. The construction of the
tree is governed by rules for each logical connective
in the language. These rules are closely related to the
semantics of the language. A complete set of such
rules for all truth-functional connectives is given in
(Jeffrey, 1967).
The concept of refutation is considerably more
straightforward in classical logic than it is in NML3.
In the former case, to prove that a set of premises
implies a conclusion A, it is sufficient to show that
~A cannot be true if the premises are. We have seen
that in NML3, “A V ~A” is not a theorem. If we
find no consistent models taking ~A as the negation
of our conclusion, we will not have proven that A
follows from our premises. We will have shown
only that it might.
In NML3 (cf. Obeid 2000), we need to consider
the following cases:
(1) A is true (resp. false) if we can find no
consistent models for M~A (resp. MA)
(2) A is true (resp. false) or unknown if we can find
no consistent models for ~MA (resp. ~M~A).
The tableau rules for the connectives & and V
are the same as those for classical logic. The tableau
negation rule is defined as follows: the negation of
an atomic non-modalized formula A is ~MA and the
negation of MA is simply ~MA.
The tableau rule for → can easily be shown to be
of the form:

        A → B
    ______|______
    |           |
   ~MA        ~M~B
The rule
(R3) If we cannot infer ~A, infer MA
requires special attention. If there exists an open
branch of a proof tree which includes a formula
~MA (or more than one, all including ~MA), then we
might fire rule (R3) to try to infer MA
nonmonotonically, thereby derive a contradiction
and finish the proof. This is achieved by setting the
target formula for proving that ~A is true against the
original premise set, and running the proof process.
If we fail to prove that ~A is true, we may infer that
A is consistent, and pass MA back to the parent proof.
It is our strategy that we only attempt to derive a
proof nonmonotonically if we fail to close all the
paths in a tree with the tableau rules. We are able to
decide whenever we wish whether or not we can
infer ~A, but it only makes sense to try when we
know we need to. This means at present that we wait
for the monotonic proof process to stop before
looking for a way to apply the rule (R3).
If we fire (R3) thereby inferring a formula MA,
we may close off any model in the proof including
the formula ~MA. We may allow several
applications of (R3) in one proof, thereby closing
different branches of the proof in different ways.
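At the control level, the strategy just described can be summarised as: exhaust the monotonic tableau rules first, and only then attempt to close remaining open branches containing ~MA via a subsidiary proof of ~A. The following sketch of ours is an abstraction of that loop (the tableau expansion and the subsidiary proof are placeholder functions, not the actual prover):

# Abstract sketch of the proof strategy of Section 5.2.  `expand_tableau` and
# `provable` are placeholders for the monotonic tableau machinery; only the
# control flow around rule (R3) is shown.

def prove_with_R3(premises, goal, expand_tableau, provable):
    """Attempt a refutation proof of `goal`, using (R3) only after the
    monotonic tableau rules have been exhausted."""
    open_branches = expand_tableau(premises, goal)    # branches left open by the tableau rules
    for branch in open_branches:
        closed = False
        for formula in branch:
            if formula.startswith("~M"):              # candidate for firing rule (R3)
                A = formula[2:]
                # Subsidiary proof: if ~A cannot be proved from the premises,
                # MA may be inferred nonmonotonically, contradicting ~MA,
                # so this branch can be closed.
                if not provable(premises, "~" + A):
                    closed = True
                    break
        if not closed:
            return False          # some model of the premises plus negated goal stays open
    return True                   # every branch is closed: goal proven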
6 DIALOGUE AND REASONING
WITHIN NML3
In this section we give two examples that show
how dialogue, together with reasoning within the
system NML3, is carried out. We shall not present
the proofs in NML3 formally due to lack of space.
Example 6.1. Consider a case in which two agents,
g1 and g2, cooperate in a diagnostic task on batteries
connected in series. g1 is in charge of testing the
voltage of the batteries and g2's task is to find out
which battery is faulty.
Consider a battery which, when operating
normally, has a voltage between 1.2 volts and 1.6
volts. We use Batt(B) to mean that B is a battery,
Volt(B,V) to mean that the voltage of B is V and
ok(V) to mean that 1.2 < V < 1.6.
Suppose that we have Batt(B1) and Batt(B2) and
that g1 observed "OBS = Volt(Series(B1,B2), 1.45)".
Then it cannot be the case that both B1 and B2 are
working normally. To appreciate how subtle and
intuitive the results are, we shall consider what g2
can infer in such a situation:
(i) should g2 infer that if one of the batteries is not
working normally then the other is?
(ii) should g2 infer that if one of the batteries is
working normally then the other is not?
It is a straightforward exercise to show that the
answer to (i) is negative and the answer to (ii) is
positive.
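The arithmetic behind the example (on our reading that the voltages of series-connected batteries add up): two normal batteries would jointly give between 2.4 and 3.2 volts, which is incompatible with the observed 1.45 volts, so at least one is abnormal; but the observation alone does not determine which. A small illustrative check of (i) and (ii):

# Illustrative check of (i) and (ii), assuming series voltages add up.
def ok(v):                        # a battery operating normally: 1.2 < v < 1.6
    return 1.2 < v < 1.6

OBS = 1.45                        # observed voltage across the two batteries in series

# (ii): if one battery is normal, the other cannot be, since the remaining
# voltage is below 1.2 for every normal value of the first battery.
assert all(not ok(OBS - v1) for v1 in (1.25, 1.45, 1.55))

# (i): if one battery is abnormal, nothing follows about the other;
# both of the following completions are consistent with OBS.
assert not ok(0.10) and ok(OBS - 0.10)          # B1 abnormal, B2 normal
assert not ok(1.00) and not ok(OBS - 1.00)      # B1 abnormal, B2 abnormal too
print("observation rules out 'both normal', but not 'both abnormal'")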
Example 6.2 Assume that we have two agents, g1
and g2. g2 needs a lift in a car. It notices a car
parked in front of John's house and decides to ask g1
the following query: can John drive? g1 knows that
John is skilled at driving and has a car. Using classical
logic, g1 would fail to derive that John can drive
because it does not know whether John has a driving
licence. Using NML3, g1 can give g2 the answer
Yes. To be more helpful, g1 may give the answer:
Yes, if John has a driving licence. Such an answer
can be reached because NML3 allows g1 to make
the assumption that John has a driving licence (if
there is no information to the contrary).
However, if g1 has learnt from another agent g3
that John does not have a driving licence, then g1
may give the answer No or, more informatively: No,
because John does not have a driving licence.
7 RELATED WORK
Acquiring knowledge from a domain expert is
considered to be one of the most important and
difficult stages in developing a successful Knowledge
Based System (KBS) (Smith, 1996). The process of
KA, together with representation, is defined as
a means whereby information is extracted,
structured and organized (Jeng, 1996). (Chan, 1996)
proposes that the KA process should not be viewed
as a single, monolithic process but rather as a series
of identifiable phases: (1) knowledge elicitation (to
obtain information from the expert), (2) knowledge
analysis (to make sense of the data acquired in the
former stage) and (3) knowledge representation.
In multi-agent communication languages, such
as KQML (Finin et al., 1994) and COSY (Haddadi,
1996), the emphasis is at the level of individual
messages, with a relative neglect of the overall
task and of knowledge modelling. The framework of
the COMMONKADS methodology (Schreiber et al.,
1994) provides a comprehensive conceptual
modelling approach, ranging from organizational
analysis to system design and implementation.
However, it does not provide us with the formalism
and the reasoning mechanism that allow us to learn
from the messages exchanged.
Most existing spoken dialogue systems focus on
simple and constrained tasks. Some examples are
found in (Pellom et al. 2001; Xu and Rudnicky
2000; Chu-Carroll 2000).
There has been other work on modelling
dialogue for complex task domains such as the
TRAINS system (Allen et al. 2001) and its
successor, TRIPS (Blaylock et al. 2002). TRIPS is a
distributed, agent-based cooperative dialogue
system. Its components act asynchronously and
communicate with each other by message passing.
Issues in supporting multi-modal interfaces have
been addressed in (McGlashan 1996) which
provides a combination of graphical and speech
modalities. Work in (Traum et al. 2003) follows the
framework of the TRINDI project (Larsson 2000)
which aims to model multi-modal dialogue for
multiple participant interaction.
An attempt is made in (Paek and Horvitz, 2000)
to build a probabilistic model (using Bayesian
networks) of possible uncertainties at different levels
of human-computer conversation. Thus the system
would be able to identify actions that maximize the
expected utility of achieving mutual understanding.
8 CONCLUDING REMARKS
In this paper, we have made a first step towards
developing a model of KA/learning via cooperative
dialogue. A key idea in the model is the concept of
integrating exchanged information within an agent's
theory. Dialogue is a structured process and the
structure is relative to what an agent knows about
the world or a domain of discourse. We have
employed a logic system NML3 which formalizes
some aspects of revisable reasoning. We have
presented a formalization of some basic dialogue
moves and the protocols of various types of
dialogue. We have also given some indication as to
how arguments, proofs, appropriate dialogue moves
and reasoning may be carried out within NML3.
On the linguistic side, the question of how a
collection of propositions is assigned as the semantic
interpretation to a linguistic message is not trivial.
Lexical and (to a lesser extent) structural ambiguity
are sensitive to what an agent knows about the
world, but "unfortunate" interpretations may still be
consistent with respect to an agent's theory.
There is a general tendency to consider
inconsistency in an agent's theory, say g's, to be a
problem that concerns only g. However, in
cooperative activities that involve more than one
agent, it may be of interest to the other agents to
know about, or minimally to be aware of, the way
inconsistency or exchanged information is dealt with
by g. This is because in such cases, one agent may
regard another agent's knowledge as in some weak
sense an extension of its own. Thus, there may be a
need to define a notion of compatibility which is
weaker and more permissive than localized logical
consistency.
REFERENCES
Allen J., Byron D., Dzilovska M., 2001, Towards
Conversational Human-Computer Interaction, AI
Magazine, 22(4), 27-37.
Blaylock N., Allen J. and Ferguson G., 2002,
Synchronization in an Asynchronous Agent-Based
Architecture for Dialogue Systems, In Proc. of 3rd
SIGdial Workshop on Discourse and Dialog, 1-10.
Chan, C.W., 1996, Knowledge Modeling for Constructing
an Expert System to support reforestation decisions,
Knowledge Based Systems, 9, 41-59.
Chu-Carroll J., 2000, MIMIC: An Adaptive Mixed
Initiative Spoken Dialogue System for Information
Queries, In Proc. of 6th ACL Conf. on Applied Natural
Language Processing, 97-104.
Dung, P.M., 1995, On the acceptability of arguments and
its fundamental role in non-monotonic reasoning, logic
programming and n-person games, Artificial
Intelligence, 77, 321-357.
Finin T., Fritzson R., Mackey D., McEntire R., 1994,
KQML as an Agent Communication Language, In:
Proc. of 13th International Conf. on Information and
Knowledge Management, ACM Press, New York.
van Emeren F. H. and Grootendorst R., 1992,
Argumentation, Communication and Fallacies,
Hillsdale, N.J., Erlbaum.
Grice H. P., 1975, Logic and Conversation, The Logic of
Grammar, ed. Donald Davidson and Gilbert Harman,
Encino, California, Dickenson, 64-75.
Haddadi, A., 1996, Communication and Cooperation in
Agent Systems, Lecture Notes in Computer Science
Series No. 1056, Springer-Verlag.
Jeng, B., 1996, Interactive Induction of Expert
Knowledge, Expert Systems with Applications 10,
393-401.
Larsson S., Traum D., 2000, Information State and
Dialogue Management in the TRINDI Dialogue Move
Engine Toolkit. Natural Language Engineering, 6,
323-340.
Maudet, N. and Evrard F., 1998, A generic framework for
dialogue game implementation. In: Proceedings of the
Second Workshop on Formal Semantics and
Pragmatics of Dialog, Universite Twente, The
Netherlands.
McGlashan S., 1996, Towards Multimodal Dialogue
Management. In Proc. of 11th Twente Workshop on
Language Technology, 13-22.
Obeid N., 1996, Three Valued Logic and Non-monotonic
Reasoning, Computers and Artificial Intelligence, Vol.
15, No. 6, 509-530.
Obeid N., 2000, Towards a Model of Learning Through
Communication, International Journal of Knowledge
and Information Systems. Vol 2, 498-508, Springer-
Verlag, USA.
Obeid N., 2005, A Model-Theoretic Semantics for Default
Logic, WSEAS Transactions on Computers, Vol. 4,
No. 6, 581-590.
Reed, C.A., Long, D.P., Fox, M. and Garagnani, M., 1997,
Persuasion as a Form of Inter-Agent Negotiation, In:
Lukose, D., Zhang, C., (eds), Proc. of 2nd Australian
Workshop on DAI, Springer Verlag, Berlin
Reiter R., 1980, A Logic for Default Reasoning, Artificial
Intelligence 13, 81-132.
Schreiber A., Wielinga B. J., Akkermans J., Van De
Velde W. and de Hoog R., 1994, CommonKADS -
A Comprehensive Methodology for KBS
Development, IEEE Expert 9(6), 28-37.
Smith, P. (1996) An Introduction to Knowledge
Engineering, International Thompson Computer Press.
Toulmin, S. 1958, The uses of argument, Cambridge
University Press, England.
Paek T., Horvitz E., 2000, Conversation as Action Under
Uncertainty, In Proc. of 16th Conf. on Uncertainty in
Artificial Intelligence, 455-464.
Pellom B., Ward W., Hansen J., Hacioglu K., Zhang J.,
Yu X., Pradhan S., 2001, University of Colorado
Dialog Systems for Travel and Navigation, In Proc. of
2001 Human Language Technology Conference.
Traum D., Rickel J., Gratch J., Marsella S., 2003,
Negotiation Over Tasks in Hybrid Human-Agent
Teams for Simulation-Based Training, In Proc. of 2nd
International Joint Conference on Autonomous Agents
and Multiagent Systems, 441-448.
Walton, D. N., 1992, Types of dialogue, dialectical shifts
and fallacies. In: Emeren, F. H. v., Grootendorst, R.,
Blair, J. A., and Willard, C. A., editors, Argumentation
illuminated, 133–147, Amsterdam.
Walton, D. and Krabbe, E., 1995, Commitment in
Dialogue: Basic Concepts of Interpersonal Reasoning.
State University of New York Press, USA.
Xu W., Rudnicky A., 2000, Task-Based Dialog
Management Using an Agenda, In Proc. of
ANLP/NAACL Workshop on Conversational Systems,
42-47.