PSI architecture used by Bach and Vuine (Bach and Vuine, 2003) and also by Lim (Lim et al., 2005), which offers a set of drives of different types, such as certainty, competence or affiliation.
Most of these works on emotional agents are based on reactive behaviors, as in the work by Cañamero. When a drive is detected, it triggers a reactive component that tries to compensate for its deviation, considering only the next one or two actions. Thus, no inference is performed on medium- or long-term goals, nor on how emotions influence the achievement of those goals. Regarding deliberative models,
there are some works on emotions based on planning, but they are mainly oriented towards storytelling, such as the emergent narrative in FEARNOT! (Aylett et al., 2005) and the interactive storytelling of Madame Bovary on the Holodeck (Cavazza et al., 2007). The work of Gratch
and coauthors (Gratch, 1999; Gratch et al., 2002)
shows relevant applications of emotional models to different research areas in artificial intelligence and autonomous agent design, endowing agents with the ability to think and to engage in socio-emotional interactions with human users.
In the present work, a model of long-term reasoning based on emotions and personality factors has been designed. It follows some ideas introduced in (Avradinis et al., 2003), using concepts that already appeared in Cañamero's works, such as motivations and the use of drives to represent basic needs. Our model uses automated planning to provide long-term deliberation on the effects of actions, taking into account not only the agent's goals, but also the impact of those actions on the emotional state of the agent.
Other models, such as Rizzo's work (Rizzo et al., 1997), combine the use of emotions and personality to assign preferences to the goals of a planning domain model, but the changes in the emotional state happen in another module. Thus, they are not really used in the reasoning process. A similar integration of a deliberative and a reactive model is the one in (Blythe and Reilly, 1993), where the emotional reasoning is again performed by the reactive component.
We have defined a planning domain model that constitutes the reasoning core of a client in a virtual and multi-agent world (Fernández et al., 2008). It is a client/server game oriented towards the intensive use of Artificial-Intelligence-controlled bots, and it was designed as a test environment for several Artificial Intelligence techniques. The game borrows its idea from the popular video game THE SIMS. Each agent controls a character that has autonomy, with its own drives, goals, and strategies for satisfying those goals.
In this implementation, we introduce the notion that an agent prefers some actions and objects over others depending on its preferences, its personality traits and its emotional state, as well as on the influence of those actions on the long-term achievement of goals. Thus, agents solve problems while improving the quality of the solution, achieving better emotional states.
The remainder of the paper describes the model design, the domain that implements the model, the empirical results that validate it, and the conclusions derived from this work, together with future research lines.
2 MODEL DESIGN
Our aim in this work is to include emotions and human personality traits in a deliberative system that uses automated planning, in order to obtain more realistic and complex agent behaviors. These behaviors are necessary for a wide variety of applications, such as agents that help users change their way of life, systems related to marketing and advertising, educational programs, and systems that play video games or automatically generate text. The goal is to show that the use of emotional features, through the establishment of preferences over certain actions and objects in the environment, improves the performance of a deliberative agent by generating better plans.
In the virtual world, an agent tries to cater for its needs, or motivations, through specific actions and by interacting with different objects. Five basic needs, easily identifiable in human beings, have been defined for the agent: hunger, thirst, tiredness, boredom and dirtiness. Along with the first three, which are widely used in many systems, we have added dirtiness and boredom, which are more domain-specific, to provide a wider variety of actions and obtain richer behaviors. These basic needs grow over time, so the agent always needs to carry out actions to keep their values within reasonable limits.
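As a minimal illustration of this drive mechanism, the following Python sketch shows needs that grow at every time step and a query for those that have drifted past an acceptable limit. All names, growth rates and the threshold are illustrative assumptions, not the actual implementation.

```python
# Sketch of the basic-needs (drives) mechanism: values grow over time
# and the agent must act to keep them within reasonable limits.
# Rates and the limit below are assumed for illustration.

DRIVES = ("hunger", "thirst", "tiredness", "boredom", "dirtiness")
GROWTH_RATE = {d: 1.0 for d in DRIVES}   # assumed units per time step
UPPER_LIMIT = 80.0                        # assumed "reasonable limit"

class Agent:
    def __init__(self):
        # Every drive starts fully satisfied (0.0 = no need).
        self.drives = {d: 0.0 for d in DRIVES}

    def tick(self):
        """Advance one time step: every basic need grows over time."""
        for d in self.drives:
            self.drives[d] = min(100.0, self.drives[d] + GROWTH_RATE[d])

    def pressing_needs(self):
        """Needs past the acceptable limit, i.e. the drives for which
        the agent must schedule compensating actions."""
        return [d for d in self.drives if self.drives[d] > UPPER_LIMIT]
```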
To cater for each of these basic needs, the agent
must perform actions. For example, it can drink
to satisfy its thirst or sleep to recover from fatigue.
There are different actions to cater for the same need,
and the agent prefers some actions over others. Thus,
the agent may choose to read a book or play a game to
reduce boredom. Moreover, the effects of those actions can differ depending on its emotional state: it will receive more benefit from more active actions when its emotional state is aroused, and from more passive or relaxed actions when it is calm.
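The sketch below illustrates one way such emotional modulation could work: an action's nominal benefit is scaled by how closely its activity level matches the agent's arousal. The scoring formula and all names are assumptions made for illustration; the actual effect computation in our model may differ.

```python
# Emotion-dependent action effects: an action's benefit is scaled by
# the match between its activity level and the agent's arousal.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    need: str          # the basic need this action caters for
    relief: float      # nominal reduction of that need's value
    activity: float    # 0.0 = very passive (read), 1.0 = very active (play)

def effective_relief(action: Action, arousal: float) -> float:
    """Scale nominal relief by the activity/arousal match (both in [0, 1]):
    aroused agents benefit more from active actions, calm agents from
    passive ones."""
    match = 1.0 - abs(action.activity - arousal)
    return action.relief * (0.5 + 0.5 * match)

# Two alternative actions catering for the same need (boredom):
read_book = Action("read_book", "boredom", relief=30.0, activity=0.2)
play_ball = Action("play_ball", "boredom", relief=30.0, activity=0.9)

calm, aroused = 0.1, 0.9
# A calm agent prefers reading; an aroused one prefers playing.
assert effective_relief(read_book, calm) > effective_relief(play_ball, calm)
assert effective_relief(play_ball, aroused) > effective_relief(read_book, aroused)
```

Under this scheme, two actions with the same nominal relief are ranked differently by calm and aroused agents, which is the kind of preference a planner can exploit when choosing among alternatives.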
To carry out each of these actions, the agent needs to use objects of specific types. Thus, it will need food to eat, a ball to play with or a book to read. There are