2 DIALOGUE AND LEARNING: A BRIEF DESCRIPTION OF RELATED LITERATURE
Several papers deal with human learning via dialogue (DA 91). Those related to computer devices usually rely on human-machine dialogue models (Bak 94; Coo 00). However, when only artificial agents are involved, the very few papers treating communication as an acquisition mode are set in non-cognitive frameworks, such as robots (AMH 96) or non-cognitive software agents. It seems that, in artificial systems, learning is often realized without dialogue.
Learning without Dialogue. There are many kinds of learning methods for symbolic agents, such as reinforcement learning and supervised learning (sometimes using communication, as in (Mat 97)), not to mention neural network models, which are very far from our domain. This type of learning prepares agents for typical situations, whereas a natural situation in which dialogue influences knowledge acquisition is likely to be unique and hardly predictable (RP 00).
Dialogue Models. Most dialogue models in computer science (notably in AI) are based on intentions (AP 80; CL 92) and rely on Speech Act Theory (Aus 62; Sea 69): they define dialogue as a succession of planned communicative actions that modify the participating agents' mental states, thus emphasizing the importance of plans (Pol 98). When agents are in a knowledge acquisition or transfer situation, they have goals: to teach or to learn a set of knowledge chunks. However, they do not have predetermined plans: they react step by step, according to the interlocutor's answers. This is why an opportunistic model of linguistic actions is better suited than a planning model. Clearly, a tutored learning situation implies a finalized dialogue (aiming at carrying out a task) as well as secondary exchanges (precision, explanation, confirmation and reformulation requests can take place to validate a question or an answer). We have chosen to assign functional roles (FR) to speech acts, since this method, described in (SFP 98), allows the modelling of unpredictable situations and computes an exchange as an adjustment between the locutors' mental states. We have adapted this method, originally designed for human-machine dialogue, to artificial agents.
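As an illustration, here is a minimal Python sketch of such an opportunistic exchange, in which the next speech act is computed from the interlocutor's last act and the current mental state rather than from a precompiled plan. The reduced FR inventory and all names are simplifying assumptions made for illustration, not the actual inventory of (SFP 98):

    from dataclasses import dataclass
    from enum import Enum, auto

    class FR(Enum):
        """A reduced functional-role inventory (illustrative assumption)."""
        PROPOSE = auto()              # offer a knowledge chunk
        ACCEPT = auto()               # integrate the chunk
        CONFIRM = auto()              # validate a previous act
        REQUEST_PRECISION = auto()    # secondary exchange: ask for details

    @dataclass
    class SpeechAct:
        speaker: str
        role: FR
        content: str

    def student_react(kb: set, act: SpeechAct) -> SpeechAct:
        """Opportunistic step: no predetermined plan; the reply is chosen
        from the interlocutor's last act and the current mental state."""
        if act.role is FR.PROPOSE:
            if act.content in kb:                 # already known: validate it
                return SpeechAct("student", FR.CONFIRM, act.content)
            kb.add(act.content)                   # unknown: adjust mental state
            return SpeechAct("student", FR.ACCEPT, act.content)
        # anything the student cannot integrate triggers a secondary exchange
        return SpeechAct("student", FR.REQUEST_PRECISION, act.content)

    kb: set = set()
    reply = student_react(kb, SpeechAct("teacher", FR.PROPOSE, "bird(tux)"))
    assert reply.role is FR.ACCEPT and "bird(tux)" in kb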
Reasoning. Reasoning, from a learning point of view, is a knowledge derivation mode, either included in the agent's functionalities or offered by the 'teacher' agent. Reasoning modifies the recipient agent's state through a set of reasoning steps. Learning is considered as the result of a reasoning procedure over new facts or predicates, which ends up integrating them into the agent's knowledge base. Thus, inspired by human behavior, the described model accounts for three types of reasoning: deduction, induction and abduction. Currently, our system uses inductive and deductive mechanisms. Abduction is not investigated as such, since we consider dialogue itself as an abductive bootstrap technique which, by presenting new knowledge, enables knowledge addition or retraction and therefore leads to knowledge revision (JJ 94; Pag 96).
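To make the two mechanisms concrete, the following sketch assumes a toy knowledge base of ground (predicate, argument) facts with hypothetical predicate names; it shows a deduction step that forward-chains a rule to a fixpoint, and an induction step that generalizes a rule from positive examples:

    def deduce(facts: set, rules: list) -> set:
        """Deduction: forward-chain Horn-like rules body(x) => head(x)
        until a fixpoint; derived facts join the knowledge base."""
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                for pred, arg in list(facts):
                    if pred == body and (head, arg) not in facts:
                        facts.add((head, arg))
                        changed = True
        return facts

    def induce(facts: set, concept: str) -> list:
        """Induction: propose p(x) => concept(x) for every predicate p
        whose known instances all satisfy the concept."""
        positives = {a for p, a in facts if p == concept}
        candidates = {p for p, _ in facts if p != concept}
        return [(p, concept) for p in candidates
                if {a for q, a in facts if q == p} <= positives]

    kb = {("penguin", "tux"), ("bird", "tweety")}
    kb = deduce(kb, [("penguin", "bird")])      # derives ('bird', 'tux')
    assert ("bird", "tux") in kb
    print(induce(kb, "bird"))                   # [('penguin', 'bird')]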
Finally, although our system is strongly inspired by dialogue between humans and by human-machine dialogue systems, it differs from them in the following respects:
• Natural language is not used as such; a formal language is preferred, in the tradition of languages such as KIF, which are widely employed in artificial agent communication. Such formal languages avoid the problems that arise from the ambiguity intrinsic to natural language (see the sketch after this list).
• When one of the agents is human, his/her knowledge is opaque not only to his/her interlocutor (here, the system) but also to the designer of the system. The designer must therefore build into the system a series of "guessing" strategies, which do not necessarily fathom the interlocutor's state of mind and may lead to dialogue failure. When both agents are artificial, on the contrary, they are both transparent to the designer, if not to each other. Thus, the designer embeds in both agents communication tools adapted to their knowledge level, and can check the state variables of both agents at any moment, something that cannot be done with a human.
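The sketch below, announced in the first item above, illustrates the kind of KIF-style content two artificial agents might exchange, and how the designer can inspect both agents' state variables. The envelope fields, the 'tell' performative and the predicates are illustrative assumptions, not a fixed standard:

    def make_message(sender: str, receiver: str,
                     performative: str, content: str) -> dict:
        """Wrap a KIF-style sentence in a minimal message envelope."""
        return {"sender": sender, "receiver": receiver,
                "performative": performative, "content": content}

    msg = make_message("teacher", "student", "tell",
                       "(forall (?x) (=> (penguin ?x) (bird ?x)))")

    # Both agents being artificial, the designer can inspect their
    # state variables at any moment.
    student_kb: set = set()
    if msg["performative"] == "tell":
        student_kb.add(msg["content"])   # unambiguous, stored as-is
    assert msg["content"] in student_kb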
These two restrictions tend to simplify the problem and, moreover, to keep it focused on the real core of the task, i.e., controlling acquisition through interaction.
3 THE THEORETICAL FRAMEWORK
3.1 Agents Frame
Our environment focuses on a situation where two cognitive artificial agents are present and interact solely through dialogue. During this relationship, one agent plays the role of 'teacher' while the other momentarily acts as a 'student'. We assume they keep these statuses throughout the dialogue session. Nevertheless, role assignment is temporary, because it depends on the task to achieve and on each agent's skills. The 'teacher' agent must have the skill required to teach the 'student' agent, i.e., to offer unknown and true knowledge necessary for the 'student' to perform a given task. Conventionally, the terms 'student' and 'teacher' will be used to refer, respectively, to the agents acting as such. The 'teacher' aims