aim of contributing to the action selection process.
The use of preference rules has been extensively studied in Logic Programming, from both a theoretical and a practical perspective. The idea is to compute all the executable rules whose activation level is above a certain threshold, and then to use preference reasoning to select the one that becomes active (rather than simply the one with the greatest activation). This allows for more flexibility, via a new candidate-threshold parameter, and an extra level of control through context-sensitive preferences. Moreover, these preferences could themselves be updated by the system (by allowing preference rules in R).
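The selection scheme just described can be sketched as follows; this is an illustrative sketch only, and the names (Rule, priority, theta) are assumptions standing in for the paper's activation levels and preference relation:

```python
# Sketch: pick the active rule from the candidates above a threshold,
# using a preference relation rather than raw activation alone.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    activation: float
    priority: int  # stand-in for a context-sensitive preference relation

def select_active(rules, theta):
    """Return the preferred rule among executable rules whose
    activation level is at least the candidate threshold theta."""
    candidates = [r for r in rules if r.activation >= theta]
    if not candidates:
        return None
    # Preference reasoning: highest priority wins; activation only
    # breaks ties among equally preferred rules.
    return max(candidates, key=lambda r: (r.priority, r.activation))

rules = [Rule("avoid_obstacle", 0.9, 1),
         Rule("recharge", 0.7, 2),
         Rule("wander", 0.3, 0)]
print(select_active(rules, theta=0.5).name)  # recharge
```

Note that the preferred rule (recharge) is selected even though another candidate has a greater activation, which is exactly the extra level of control the preference mechanism provides.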
Other techniques developed by the Logic Programming community could be applied here. For example, belief revision techniques could be used to resolve conflicts when more than one rule is allowed to become active (at the moment only one rule can become active, and consequently only one action at a time can be sent to the actuator); and rule update techniques in the spirit of EVOLP (Alferes et al., 2002) could be employed. Generalizing the language L to full EVOLP would allow for non-deterministic evolutions (choosing one arbitrarily or according to some probability). Further, genetic algorithms could be used to tune the global parameters of the network, selecting the most effective action-selection behaviour from a population. In this way, a set of parameters can be evolved instead of being tuned by hand (see (Singleton, 2002) for a discussion).
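A minimal sketch of such evolutionary parameter tuning, in the spirit of (Singleton, 2002), might look as follows. The fitness function here is a placeholder assumption; in practice it would score the action-selection performance of a network run with the candidate parameters:

```python
# Sketch: evolving a vector of global network parameters with a
# simple genetic algorithm (truncation selection, one-point
# crossover, point mutation).
import random

random.seed(0)

TARGET = [0.5, 0.2, 0.8]  # hypothetical "good" parameter setting

def fitness(params):
    # Placeholder: closeness to TARGET. A real fitness function
    # would run the behaviour network and score its performance.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def evolve(pop_size=20, generations=50, n_params=3):
    pop = [[random.random() for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_params)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_params)        # point mutation
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # a parameter vector near the optimum of the fitness function
```

The point of the sketch is only the overall loop: a hand-tuned parameter set is replaced by a population whose best member is retained across generations.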
Finally, to tackle the problem of modelling very complex environments, we may design and construct networks of behaviour networks, with either a hierarchical or a distributed structure, or even behaviour networks that compete with one another to acquire control.
For example, in a scenario where we need to control a complex building consisting of several floors, we may employ a number of behaviour networks, each controlling a different apartment on each floor, and then organize them in a hierarchical network where the behaviour networks higher up in the hierarchy have the role of supervising those at lower levels.
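The hierarchical organisation suggested above can be sketched as a supervisor network arbitrating among the proposals of per-floor networks. All class and method names here are illustrative assumptions, not part of the paper's formal model:

```python
# Sketch: a supervisor behaviour network overseeing lower-level ones,
# selecting among the actions they propose.
class BehaviourNetwork:
    def __init__(self, name, action):
        self.name = name
        self.action = action

    def propose(self, sensors):
        # A real network would run its activation dynamics here;
        # we simply read an urgency value from the sensor map.
        return (self.action, sensors.get(self.name, 0.0))

class Supervisor:
    """Higher-level network supervising lower-level networks."""
    def __init__(self, subnetworks):
        self.subnetworks = subnetworks

    def select(self, sensors):
        # Gather proposals from the lower levels and arbitrate:
        # here, the most urgent proposal wins.
        proposals = [n.propose(sensors) for n in self.subnetworks]
        return max(proposals, key=lambda p: p[1])[0]

floors = [BehaviourNetwork("floor1", "dim_lights"),
          BehaviourNetwork("floor2", "start_hvac")]
building = Supervisor(floors)
print(building.select({"floor1": 0.4, "floor2": 0.9}))  # start_hvac
```

Supervisors could themselves be composed into deeper hierarchies, or replaced by a distributed arbitration scheme, along the lines discussed above.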
REFERENCES
Alferes, J. J., Brogi, A., Leite, J. A., and Pereira, L. M.
(2002). Evolving logic programs. In Proceedings of
the 8th European Conf. on Logics in Artificial Intelli-
gence (JELIA’02), LNCS 2424, pp. 50–61.
Antsaklis, P. J. and Nerode, A. (1998). Hybrid control sys-
tems: An introductory discussion to the special issue.
IEEE Trans. on Automatic Control, 43(4):457–460.
Guest Editorial.
Brooks, R. A. (1986). A robust layered control system for
a mobile robot. IEEE J. of Robotics and Automation,
2(1):14–23.
Davidsson, P. and Boman, M. (2000). A multi-agent
system for controlling intelligent buildings. In Proc. of the 4th Int. Conf. on MultiAgent Systems, pp. 377–378.
Davoren, J. M. and Nerode, A. (2000). Logic for hybrid sys-
tems. Proc. of IEEE Special Issue on Hybrid Systems,
88(7):985–1010.
Franklin, G. F., Powell, J. D., and Emami-Naeini, A. (2002).
Feedback Control of Dynamic Systems. Prentice Hall.
Franklin, S. (1995). Artificial Minds. MIT Press.
Hagras, H., Callaghan, V., Colley, M., Clarke, G., Pounds-
Cornish, A., and Duman, H. (2004). Creating
an ambient-intelligence environment using embedded
agents. IEEE Intelligent Systems and Their Applica-
tions, 19(6):12–20.
InterProlog (2004). Declarativa. Available at www.declarativa.com/InterProlog/default.htm.
Java Technology (2004). Sun Microsystems. Available at http://java.sun.com.
Koutsoukos, X. D., Antsaklis, P. J., Stiver, J. A., and Lem-
mon, M. D. (2000). Supervisory control of hybrid sys-
tems. Proc. of IEEE, Special Issue on Hybrid Systems,
88(7):1026–1049.
Maes, P. (1989). How to do the right thing. Connection
Science Journal, Special Issue on Hybrid Systems,
1(3):291–323.
Maes, P. (1991). A bottom-up mechanism for behavior se-
lection in an artificial creature. In Meyer, J. A. and
Wilson, S. (eds.), Proc. of the first Int. Conf. on Simu-
lation of Adaptive Behavior. MIT Press.
Minsky, M. (1986). The Society of Mind. Simon and Schus-
ter, New York.
Mozer, M. M. (1998). The neural network house: An environment that adapts to its inhabitants. In Coen, M. (ed.), Proc. of the American Association for Artificial Intelligence Spring Symposium on Intelligent Environments, pp. 110–114.
Rutishauser, U., Joller, J., and Douglas, R. (2005). Control
and learning of ambience by an intelligent building.
IEEE Trans. on Systems, Man and Cybernetics, Part
A, 35(1):121–132. Special Issue on Ambient Intelli-
gence.
Singleton, D. (2002). An Evolvable Approach to the Maes Action Selection Mechanism. Master's thesis, University of Sussex. Available at http://www.informatics.susx.ac.uk/easy/Publications.
Tu, X. (1999). Artificial Animals for Computer Animation:
Biomechanics, Locomotion, Perception, and Behav-
ior. PhD thesis, ACM Distinguished Ph.D Disserta-
tion Series, LNCS 1635.
van Beek, B., Jansen, N. G., Schiffelers, R. R. H., Man, K. L., and Reniers, M. A. (2003). Relating chi to hybrid automata. In Chick, S., Sánchez, P. J., Ferrin, D., and Morrice, D. (eds.), Proc. of the 2003 Winter Simulation Conference, pp. 632–640.
XSB-Prolog (2004). XSB Inc. Available at xsb.sourceforge.net.
MODELLING HYBRID CONTROL SYSTEMS WITH BEHAVIOUR NETWORKS