2.2 Reasoning Model
After A has expressed his intention (that B do D), B can respond with agreement or rejection, depending on the result of her reasoning. We want to model a "naïve" theory of reasoning of the kind that people themselves use when interacting with other people and trying to predict and influence their decisions.
The reasoning model consists of two parts: 1) a model of the human motivational sphere; 2) reasoning schemes. In the motivational sphere, three basic factors that regulate a subject's reasoning concerning D are differentiated. First, the subject may wish to do D if its pleasant aspects for him/her outweigh the unpleasant ones; second, the subject may find it reasonable to do D if D is needed to reach some higher goal and its useful aspects outweigh the harmful ones; and third, the subject can be in a situation where (s)he must (is obliged to) do D, because not doing D would lead to some kind of punishment. We call these the WISH-, NEEDED- and MUST-factors, respectively.
It is supposed here that the dimensions pleasant/unpleasant and useful/harmful have numerical values and that in the process of reasoning (weighing the pro and counter arguments) these values can be summed up. For example, for the characterisation of pleasant and unpleasant aspects of some action there are specific words which can be expressed quantitatively: enticing, delightful, enjoyable, attractive, acceptable, unattractive, displeasing, repulsive, etc.
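Purely for illustration, such evaluative words can be placed on a numerical scale and summed; the following Python sketch shows the idea, where the concrete values and the names EVALUATIONS and weigh are our own hypothetical choices:

# Hypothetical numerical values for evaluative words
# (positive = pleasant, negative = unpleasant); the scale is an assumption.
EVALUATIONS = {
    'enticing': 4, 'delightful': 3, 'enjoyable': 2, 'attractive': 1,
    'acceptable': 0, 'unattractive': -1, 'displeasing': -2, 'repulsive': -3,
}

def weigh(aspects):
    """Sum the evaluations of the aspects of an action D."""
    return sum(EVALUATIONS.get(a, 0) for a in aspects)

print(weigh(['enjoyable', 'displeasing']))  # 0: the arguments cancel out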
We have represented the model of the motivational sphere of a subject by the following vector of weights:
w = (w(resources), w(pleasant), w(unpleasant), w(useful), w(harmful), w(obligatory), w(prohibited), w(punishment-for-doing-a-prohibited-action), w(punishment-for-not-doing-an-obligatory-action)).
Here w(pleasant), etc. denotes the weight of the pleasant, etc. aspects of D.
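As a minimal sketch only, such a vector could be stored as a simple record; the Python class below and its field names are our own rendering of the components above, and treating w(obligatory) and w(prohibited) as 0/1 flags is likewise an assumption:

from dataclasses import dataclass

@dataclass
class MotivationalSphere:
    """One subject's weights with respect to an action D (sketch)."""
    resources: float
    pleasant: float
    unpleasant: float
    useful: float
    harmful: float
    obligatory: float    # assumed 0/1 flag
    prohibited: float    # assumed 0/1 flag
    punishment_for_doing_a_prohibited_action: float
    punishment_for_not_doing_an_obligatory_action: float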
The second part of the reasoning model consists of reasoning schemes that are supposed to regulate human action-oriented reasoning. The reasoning proceeds depending on the determinant which triggers it (WISH, NEEDED or MUST). As an example, let us present one reasoning procedure.
// Reasoning triggered by the NEEDED-determinant
// Presumption: w(useful) > w(harmful)
1. Are there enough resources for doing D?
2. If not then do not do D.
3. Is w(pleasant) > w(unpleasant)?
4. If not then go to 10.
5. Is D prohibited?
6. If not then do D.
7. Is w(pleasant) + w(useful) > w(unpleasant) + w(harmful) + w(punishment-for-doing-a-prohibited-action)?
8. If yes then do D.
9. Otherwise do not do D.
10. Is D obligatory?
11. If not then do not do D.
12. Is w(pleasant) + w(useful) + w(punishment-for-not-doing-an-obligatory-action) > w(unpleasant) + w(harmful)?
13. If yes then do D.
14. Otherwise do not do D.
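The procedure can be made concrete with the following Python sketch; the function name needed_reasoning, the dictionary keys, and the treatment of "prohibited" and "obligatory" as Boolean flags are our own assumptions:

def needed_reasoning(w, has_resources, prohibited, obligatory):
    """Reasoning triggered by the NEEDED-determinant (presumption,
    checked by the caller: w['useful'] > w['harmful']).
    Returns True for "do D" and False for "do not do D"."""
    if not has_resources:                       # steps 1-2
        return False
    if w['pleasant'] > w['unpleasant']:         # steps 3-4
        if not prohibited:                      # steps 5-6
            return True
        return (w['pleasant'] + w['useful'] >   # steps 7-9
                w['unpleasant'] + w['harmful'] + w['punishment_prohibited'])
    if not obligatory:                          # steps 10-11
        return False
    return (w['pleasant'] + w['useful'] + w['punishment_obligatory'] >
            w['unpleasant'] + w['harmful'])     # steps 12-14

# Example: D is pleasant and not prohibited, so the subject decides to do D.
weights = {'pleasant': 2, 'unpleasant': 1, 'useful': 3, 'harmful': 1,
           'punishment_prohibited': 0, 'punishment_obligatory': 0}
print(needed_reasoning(weights, has_resources=True,
                       prohibited=False, obligatory=False))  # True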
3 KNOWLEDGE REPRESENTATION
3.1 World Knowledge
We are using frames for representing world knowledge in our system. Let us consider the following situation: A makes B a proposal to do an action D. For example, Mary proposes that John make a potato salad for the party.
Our system contains the frame ACTION:
ACTION
RESOURCES
ACTOR
ACT: a sequence of elementary acts
SETTING: ACTOR has RESOURCES
GOAL
CONSEQUENCE
The frame ACTION has sub-frames, e.g.:
PREPARING-POTATO-SALAD
SUP: ACTION
RESOURCES:
  Components: boiled potato, boiled egg, pickled cucumber, chopped onion, sour cream, salt, bowl
  Skills: take, chop up, mix, decorate, add
  Time: 30 minutes
ACT: take Components; chop up potato, egg, cucumber; mix in bowl; decorate with onion; add salt
GOAL, CONSEQUENCE: potato salad
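A possible machine representation of such frames, sketched in Python; the use of dictionaries and the snake_case slot names are our assumptions:

# Sketch: the general ACTION frame as a dictionary of slots.
ACTION = {
    'resources': None,
    'actor': None,
    'act': 'a sequence of elementary acts',
    'setting': 'ACTOR has RESOURCES',
    'goal': None,
    'consequence': None,
}

# Sketch: the sub-frame inherits the slots of ACTION (SUP) and fills them in.
PREPARING_POTATO_SALAD = {
    **ACTION,
    'resources': {
        'components': ['boiled potato', 'boiled egg', 'pickled cucumber',
                       'chopped onion', 'sour cream', 'salt', 'bowl'],
        'skills': ['take', 'chop up', 'mix', 'decorate', 'add'],
        'time': '30 minutes',
    },
    'act': ['take Components', 'chop up potato, egg, cucumber',
            'mix in bowl', 'decorate with onion', 'add salt'],
    'goal': 'potato salad',
    'consequence': 'potato salad',
}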
3.2 Communication Knowledge
We are using two kinds of knowledge about
communication: 1) descriptions of dialogue acts
(proposal, question, argument, etc.), and 2)
communication algorithms - communicative
strategies and tactics.
3.2.1 Dialogue Acts
The dynamic parts of dialogue acts support a coherent dialogue: for each dialogue act there is a limited set of dialogue acts that can follow it, as sketched below.
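Such constraints could be encoded, for example, as a table of admissible follow-up acts; the act names and transitions in this Python sketch are hypothetical and serve only to show the idea:

# Hypothetical table: which dialogue acts may follow the current one.
FOLLOW_UPS = {
    'proposal': {'agreement', 'rejection', 'counter-argument', 'question'},
    'question': {'answer'},
    'counter-argument': {'argument', 'agreement', 'rejection'},
}

def is_coherent(current_act, next_act):
    """Check whether next_act is admissible after current_act."""
    return next_act in FOLLOW_UPS.get(current_act, set())

print(is_coherent('proposal', 'rejection'))  # True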
3.2.2 Communicative Strategies and Tactics
A communicative strategy is an algorithm used by a
participant for achieving his/her goal in interaction.