active goal is g1 (i.e. because g2 is satisfied, its priority is −∞). Since c does not know the environment,
c explores it to find o. Figure 3 shows the different
values given to runnable actions by the ASM. At the
beginning, the sole runnable action is explore, hence
it is chosen. While exploring, c loses energy, reach-
ing point 2. Its energy level falls below v, so g2 receives a priority that depends on the energy value: the lower the energy, the higher the priority. Consequently, the move to apple action (to eat it) becomes runnable. As shown in the chart, its value is the greatest, so c chooses to move towards the apple. Since its energy keeps decreasing during these moves, the priority of this action keeps increasing too. At point 2', c eats the apple, regains energy, g2 becomes inactive again and c resumes exploring to find o.
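As a rough illustration of this energy-dependent priority, consider the following minimal sketch. The threshold value and the linear mapping are assumptions for illustration only; the paper does not give the exact formula.

    # Minimal sketch (assumed, not the paper's formula): goal g2 ("eat") has
    # priority -infinity while the energy level is above the threshold v, and a
    # priority that grows as energy falls further below v.
    V_THRESHOLD = 50.0   # assumed value of the threshold v

    def g2_priority(energy: float) -> float:
        """Priority of goal g2 as a function of the current energy level."""
        if energy >= V_THRESHOLD:
            return float("-inf")   # g2 is satisfied: never selected
        # Any monotonically increasing map of the deficit would do; linear here.
        return (V_THRESHOLD - energy) / V_THRESHOLD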
Reaching point 3, c perceives o; trying to open the door, c “learns” that it is locked. The plan then proposes two possibilities to pass through the door: to unlock it or to break it. Two runnable actions arise, one per alternative: move towards the previously perceived key, and explore to find something to break the door. Since c's personality leans towards brutality, c prefers breaking to unlocking, which explains why the alternative including the explore action is favoured over the one with move. The move alternative corresponds to the lowest curve starting from 3, and the explore alternative to the uppermost one. The latter is especially high since, by coincidence, g2 becomes active again at the same time; explore is then also runnable in order to find some food, and multigoal revalorization promotes explore.
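Multigoal revalorization can be pictured with a small sketch: when a single action (here explore) serves several active goals at once (finding o and finding food), its combined value is promoted above the best single-goal value. The combination rule and the bonus factor below are assumptions, not the paper's actual formula.

    # Sketch of multigoal revalorization under assumed semantics.
    def revalorized_value(values_per_goal: list, bonus: float = 0.2) -> float:
        """Combine the values one action receives from each active goal it serves."""
        base = max(values_per_goal)          # best single-goal value
        extra_goals = len(values_per_goal) - 1
        return base * (1.0 + bonus * extra_goals)

    # explore valued 0.6 for "find o" and 0.4 for "find food":
    print(revalorized_value([0.6, 0.4]))     # 0.72 > 0.6, explore is promoted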
While exploring, c goes “down” and, at point 4, perceives the axe. The runnable action for break is then no longer explore but take axe, which corresponds to the new curve in the middle, and explore loses its multigoal revalorization. This explains why the uppermost curve weakens, although it still has the highest priority. When c reaches point 5, opportunism comes into play: since c is close to the axe, the runnable action take axe is favoured and obtains the highest priority. The peak at 5 is thus due to opportunism, and the corresponding small dip of explore to a temporary loss of inertia. Once the axe is taken, the opportunism motivation disappears and explore regains the highest priority; c then finds and eats the second apple at point 6. Next, break the door is the action selected by the ASM: c moves “up” towards the door, breaks it and takes o (not shown).
This small experiment illustrates how the ASM works and the various motivations involved: opportunism, goals, preferences, inertia and multigoal revalorization. Performance in time and space is not highlighted here but has been evaluated in other experiments.
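To make the combination of motivations concrete, here is a self-contained sketch of how such an ASM could score runnable actions. All names, weights and formulas below (RunnableAction, WEIGHTS, INERTIA_BONUS, the weighted sum) are illustrative assumptions, not the implementation evaluated in the experiments.

    # Assumed ASM scoring loop: each runnable action is scored as a weighted
    # combination of its motivation values; the best-scoring action runs.
    from dataclasses import dataclass

    @dataclass
    class RunnableAction:
        name: str
        goal_value: float      # value inherited from the active goal's priority
        preference: float      # personality bias, e.g. "brutal" favours break
        opportunism: float     # boost when the needed resource is nearby
        served_goals: int = 1  # input to multigoal revalorization

    WEIGHTS = {"goal": 1.0, "pref": 0.5, "opp": 0.8}  # assumed weights
    INERTIA_BONUS = 0.3       # assumed bonus for sticking to the current action

    def select(actions, current=None):
        def score(a):
            s = (WEIGHTS["goal"] * a.goal_value
                 + WEIGHTS["pref"] * a.preference
                 + WEIGHTS["opp"] * a.opportunism)
            s *= 1.0 + 0.2 * (a.served_goals - 1)  # multigoal revalorization
            if a.name == current:
                s += INERTIA_BONUS                  # inertia avoids dithering
            return s
        return max(actions, key=score)

    # Around point 5 of the experiment: opportunism makes take axe win even
    # though explore serves two goals and benefits from inertia.
    actions = [RunnableAction("explore", 0.6, 0.3, 0.0, served_goals=2),
               RunnableAction("take axe", 0.5, 0.4, 0.9)]
    print(select(actions, current="explore").name)  # -> take axe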
5 CONCLUSIONS
“Always-running” applications are a very constrained context for behaviour designers. We propose a model of action selection mechanism defined as a combination of several motivations. This ASM makes it possible to define modular, believable and easy-to-design behaviours. Since it is robust to changes in the environment and its motivations are understandable, the designer's task of building behaviours is made easier.
Such an ASM can be used to design the behaviour of believable cognitive situated characters such as NPCs in video games. Characters can easily be distinguished from one another and various personalities can be obtained. A concrete proposal has been made and experiments have been carried out to validate it.
Future work concerns the implementation of relational evaluators and the carrying out of further experiments. In parallel, a collaboration is in progress with an MMORPG company to use this ASM. Other motivations are also being investigated, for instance emotional features.