implicit, e.g. the user assesses that doing D would be more unpleasant than pleasant. By implementing its communicative strategy and tactics, the computer tries to influence the partner model in a way that will cause the partner to make a positive decision based on the changed model. The problem is that the computer does not “know” the real weights the user attributes to the different aspects of D; it can only estimate these values from the user’s negative responses.
At the beginning of a dialogue the computer randomly generates a user model. At the moment we impose only one restriction: the initial model must satisfy the assumption(s) that underlie the corresponding reasoning procedure. Thus, for enticing w(pleasant) > w(unpleasant), for persuading w(useful) > w(harmful), and for threatening w(obligatory) = 1. Once an initial model has been generated, the computer uses it as the partner model and informs the user about its communicative goal.
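To make the constrained generation concrete, the following minimal sketch (in Python) draws random weights and keeps only models that satisfy the assumption of the chosen goal; the aspect names, the 0–10 weight range, and the binary treatment of w(obligatory) are our illustrative assumptions, not details taken from the implementation.

```python
import random

# Illustrative aspect set; the actual model may use further aspects of D.
ASPECTS = ["pleasant", "unpleasant", "useful", "harmful"]

def generate_initial_model(goal):
    """Randomly generate a user model whose weights satisfy the
    assumption underlying the reasoning procedure for `goal`."""
    while True:  # rejection sampling until the constraint holds
        w = {a: random.randint(0, 10) for a in ASPECTS}
        w["obligatory"] = random.randint(0, 1)  # binary in this sketch
        if goal == "enticing" and w["pleasant"] > w["unpleasant"]:
            return w
        if goal == "persuading" and w["useful"] > w["harmful"]:
            return w
        if goal == "threatening":
            w["obligatory"] = 1  # the procedure assumes w(obligatory) = 1
            return w
```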
It chooses a sentence r_B1 from a special file of computer sentences. A user can choose his sentences r_Ai (i = 1, 2, ...) from a special file of user sentences, i.e.
he can “play a role” but cannot use unrestricted
texts. If the user has chosen a sentence of refusal, the computer decides that the user model is inexact and needs amending. The class of the refusal sentence is recognized, and from it the computer determines which aspect of D had a weight in the user model that was either too small or too great and thus brought about the computer’s false decision. Based on the valid reasoning procedure (and tactics), a new value is then computed for this weight, one that is congruent with the negative decision (as explicated by the user’s expression).
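The amendment step might be sketched as follows; the refusal classes, the aspect each class points at, and the fixed step size are illustrative assumptions rather than the paper’s actual procedure.

```python
# Hypothetical mapping from a recognized class of refusal sentence to
# the aspect of D it concerns and the direction in which the current
# weight estimate must be wrong.
REFUSAL_CLASSES = {
    "too_unpleasant": ("unpleasant", "too_small"),  # unpleasantness underrated
    "not_useful":     ("useful",     "too_great"),  # usefulness overrated
    "too_harmful":    ("harmful",    "too_small"),  # harmfulness underrated
}

def amend_model(model, refusal_class, step=1):
    """Amend the partner model after a refusal so that it becomes
    congruent with the user's negative decision."""
    aspect, error = REFUSAL_CLASSES[refusal_class]
    if error == "too_small":
        model[aspect] += step  # the real weight is higher than guessed
    else:
        model[aspect] -= step  # the real weight is lower than guessed
    return model
```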
Our research has a practical aim: to implement a communication trainer, a computer program that would allow the user to practice achieving certain communicative goals: (a) getting the partner to decide to perform an action, or (b) on the contrary, opposing the partner (Koit, 2012).
5 FUTURE WORK
We have examined here only a very restricted type of dialogue, in which the user must play a particular rigid role. In the future we plan to model situations where the computer takes participant A’s role. In order to do that, A’s strategies and tactics need to be modeled.
One of our priorities will be to investigate the
possibilities of adding contextual aspects to the
reasoning model. One option is to include the
personal background of the participants, e.g. by
elaborating the notion of communicative space
(Brown and Levinson, 1999). In our case, the
communicative space is determined by a number of
coordinates, such as social distance between the
partners (far between adversaries, close between
friends), intensity of communication (peaceful,
vehement), etc. Without taking this information into account, formal reasoning about an action can easily run into problems such as inconsistency, when knowledge is considered in the wrong context; inefficiency, when irrelevant knowledge is considered; or incompleteness, when the relevant inferences are not made.
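For concreteness, such a communicative space could be represented as a small record of coordinates; the field names and the 0-to-1 scales below are our assumptions, not part of the model as published.

```python
from dataclasses import dataclass

@dataclass
class CommunicativeSpace:
    """Coordinates of the communicative space (illustrative sketch)."""
    social_distance: float  # 0 = close (friends) ... 1 = far (adversaries)
    intensity: float        # 0 = peaceful ... 1 = vehement

# E.g. a vehement exchange between adversaries:
space = CommunicativeSpace(social_distance=0.9, intensity=0.8)
```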
ACKNOWLEDGEMENTS
This work is supported by the European Regional
Development Fund through the Estonian Centre of
Excellence in Computer Science (EXCS), the
Estonian Research Council (grant ETF9124), and the
Estonian Ministry of Education and Research (grant
SF0180078s08).
REFERENCES
Brown, P. and Levinson, S. C., 1999. Politeness: Some
universals in language usage. In The discourse reader.
London: Routledge, 321–335.
D’Andrade, R., 1987. A Folk Model of the Mind. In Cultural Models in Language and Thought. London: Cambridge University Press, 112–148.
Davies, M. and Stone, T., 1995. Folk psychology: the
theory of mind debate. Oxford, Cambridge,
Massachusetts: Blackwell.
Ginzburg, J. and Fernández, R., 2010. Computational Models of Dialogue. In The Handbook of Computational Linguistics and Natural Language Processing. Wiley-Blackwell, 429–481.
Hutchby, I. and Wooffitt, R., 1998. Conversation Analysis.
Principles, Practices and Applications. Cambridge,
UK: Polity Press.
Jokinen, K., 2009. Constructive Dialogue Modelling:
Speech Interaction and Rational Agents. John Wiley
& Sons Ltd.
Jurafsky, D. and Martin, J. H., 2008. Speech and
Language Processing: An Introduction to Natural
Language Processing, Computational Linguistics, and
Speech Recognition. Prentice Hall.
Koit, M., 2012. Developing Software for Training
Argumentation Skills. In Proc. of CMNA 2012,
Workshop of ECAI 2012, 11–15. Montpellier, France.
http://www.cmna.info/
Koit, M., 2011. Conversational Agent in Argumentation: From Study of Human-human Dialogues to Reasoning Model.