simulation engine. Each command includes a special
keyword “Do:” and a valid institutional-level message, e.g. “Do:EnterScene(Meeting)”.
The nodes of the learning graph are seen as internal states of the agent, the arcs determine the mechanism of switching between states, and P(Node) determines the probability of changing the agent's current state to the state determined by the next node. Once the agent reaches a state S(Node_i), it considers all the nodes connected to Node_i that lead to the goal node and conducts a probability-driven selection of the next node (Node_k). If Node_k is found, the agent changes its current state to S(Node_k) by executing the best matching sequence of the lower abstraction level stored on the arc that connects Node_i and Node_k. If no such actions are present on the arc, the agent sends the message associated with Node_k and updates its internal state accordingly. This process continues recursively through all the abstraction levels.
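The transition mechanism above can be sketched in code. This is a minimal illustration, not the authors' implementation: the class `Node`, the arc tuple layout, and the functions `select_next_node` and `advance` are hypothetical names introduced here, assuming arcs carry a selection probability, a target node, and optional lower-level action sequences.

```python
import random

class Node:
    """A node of the learning graph; the fields are illustrative only."""
    def __init__(self, name, message=None):
        self.name = name
        self.message = message   # institutional message, e.g. "Do:EnterScene(Meeting)"
        self.arcs = []           # outgoing arcs: (P(Node), target, action sequences)

    def add_arc(self, probability, target, sequences=None):
        self.arcs.append((probability, target, sequences or []))

def select_next_node(node, leads_to_goal):
    """Probability-driven selection among the successors of `node`
    that lead to the goal node."""
    candidates = [arc for arc in node.arcs if leads_to_goal(arc[1])]
    if not candidates:
        return None
    # Roulette-wheel selection weighted by the P(Node) stored on each arc.
    r = random.uniform(0, sum(p for p, _, _ in candidates))
    acc = 0.0
    for p, target, sequences in candidates:
        acc += p
        if r <= acc:
            return target, sequences
    return candidates[-1][1], candidates[-1][2]

def advance(executed, node, leads_to_goal):
    """One state transition: execute a stored lower-level action sequence
    if the arc has one, otherwise send the next node's message."""
    chosen = select_next_node(node, leads_to_goal)
    if chosen is None:
        return node
    target, sequences = chosen
    if sequences:
        for action in sequences[0]:   # best match selected by the classifier
            executed.append(action)
    else:
        executed.append(target.message)
    return target
```

In a full implementation, `advance` would itself recurse over the lower abstraction levels rather than appending actions to a flat list.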
The parameters currently observed by the agent must match the parameters of the selected sequence as closely as possible. To this end, the agent creates a list of the parameters it can currently observe and passes this list to a classifier (currently, a nearest-neighbor classifier (Hastie and Tibshirani, 1996)). The latter returns the best matching sequence, and the agent executes each of its actions. The same procedure continues until the desired node is reached.
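A plain 1-nearest-neighbor lookup over the stored sequences' parameter vectors captures the idea; the function name, the dictionary layout of a stored sequence, and the use of Euclidean distance are assumptions of this sketch (the paper's classifier is the discriminant adaptive variant of Hastie and Tibshirani, not shown here).

```python
import math

def best_matching_sequence(observed, stored_sequences):
    """Return the stored sequence whose parameter vector lies nearest
    (Euclidean distance) to the currently observed parameters."""
    def dist(params):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(observed, params)))
    # 1-NN: the sequence with the minimal distance is the best match.
    return min(stored_sequences, key=lambda seq: dist(seq["params"]))
```

The agent would then execute each action of the returned sequence in order.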
4 CONCLUSIONS
We have developed our argument for the need for implicit training of virtual agents participating in 3D Electronic Business Environments and highlighted the role of the environment itself in the feasibility of implicit training. Formalizing the environment with Virtual Institutions can significantly simplify the learning task. However, to use this formalization successfully, the agent requires specific data structures to operate with. The paper has presented an example of such a data structure, called recursive-arc graphs, that could be used by agents participating in Virtual Institutions. Future work includes the development of a prototype that would confirm that such data structures are indeed suitable for training believable agents in 3D electronic business environments.
ACKNOWLEDGEMENTS
This research is partially supported by an ARC
Discovery Grant DP0879789, the e-Markets Re-
search Program (http://e-markets.org.au), projects AT
(CONSOLIDER CSD2007-0022), IEA (TIN2006-
15662-C02-01), EU-FEDER funds, and by the Gener-
alitat de Catalunya under the grant 2005-SGR-00093.
REFERENCES
Aleotti, J., Caselli, S., and Reggiani, M. (2003). Toward
Programming of Assembly Tasks by Demonstration in
Virtual Environments. In IEEE Workshop Robot and
Human Interactive Communication., pages 309 – 314.
Alissandrakis, A., Nehaniv, C. L., and Dautenhahn, K.
(2001). Through the Looking-Glass with ALICE: Try-
ing to Imitate using Correspondences. In Proceed-
ings of the First International Workshop on Epige-
netic Robotics: Modeling Cognitive Development in
Robotic Systems, pages 115–122, Lurid, Sweden.
Bauckhage, C., Gorman, B., Thurau, C., and Humphrys, M.
(2007). Learning Human Behavior from Analyzing
Activities in Virtual Environments. MMI-Interaktiv,
12(April):3–17.
Biuk-Aghai, R. P. (2003). Patterns of Virtual Collabora-
tion. PhD thesis, University of Technology Sydney,
Australia.
Bogdanovych, A. (2007). Virtual Institutions. PhD thesis,
University of Technology, Sydney, Australia.
Breazeal, C. (1999). Imitation as social exchange between
humans and robots. In Proceedings of the AISB Sym-
posium on Imitation in Animals and Artifacts, pages
96–104.
Esteva, M. (2003). Electronic Institutions: From Specification to Development. PhD thesis, Institut d'Investigació en Intel·ligència Artificial (IIIA), Spain.
Gorman, B., Thurau, C., Bauckhage, C., and Humphrys,
M. (2006). Believability Testing and Bayesian Imita-
tion in Interactive Computer Games. In Proceedings
of the 9th International Conference on the Simulation
of Adaptive Behavior (SAB’06), volume LNAI 4095,
pages 655–666. Springer.
Hastie, T. and Tibshirani, R. (1996). Discriminant adap-
tive nearest neighbor classification and regression. In
Touretzky, D. S., Mozer, M. C., and Hasselmo, M. E.,
editors, Advances in Neural Information Processing
Systems, volume 8, pages 409–415. The MIT Press.
Huang, T. S., Nijholt, A., Pantic, M., and Pentland, A., edi-
tors (2007). Artificial Intelligence for Human Comput-
ing, ICMI 2006 and IJCAI 2007 International Work-
shops, volume 4451 of Lecture Notes in Computer
Science. Springer.
Le Hy, R., Arrigony, A., Bessiere, P., and Lebeltel,
O. (2004). Teaching bayesian behaviors to video
game characters. Robotics and Autonomous Systems,
47:177–185.
Livingstone, D. (2006). Turing’s test and believable AI in
games. Computers in Entertainment, 4(1):6–18.
Maatman, R. M., Gratch, J., and Marsella, S. (2005). Nat-
ural behavior of a listening agent. Lecture Notes in
Computer Science, pages 25–36.
TRAINING BELIEVABLE AGENTS IN 3D ELECTRONIC BUSINESS ENVIRONMENTS USING RECURSIVE-ARC
GRAPHS