certainly handy when reflecting upon an agent’s behav-
ior, but may not be necessary, or even desirable, when
performing the same acts.
The principle of the frame of reference may be illustrated by the parable of the ant, presented by Herbert A. Simon (Simon, 1969). Imagine an ant making its way across a beach, and that the path it chooses is traced. When observing all the twists and
turns the ant made, one may be tempted to infer a
fairly complex internal navigation process. However,
the complexity of the path may not be the result of
the complexity of the ant, but the result of interac-
tion between a relatively simple control system and a
complex environment.
Long before Brooks presented his ideas on reac-
tive robotics (Brooks, 1986; Brooks, 1990; Brooks,
1991b), it was shown that complex behavior could
emerge from simple systems, for example through the
Homeostat (Ashby, 1960) and Machina speculatrix
(Walter, 1963). Furthermore, Braitenberg’s Vehicles
(Braitenberg, 1986) was one of the most important sources of inspiration for Brooks’s work.
This discussion constitutes a central part of the
criticism against deliberative systems and the motiva-
tion for a reactive approach. However, since reactive
systems do not define any ontology with meaningful inputs, many types of tasks, typically sequential ones, are very hard to represent in this manner. Even
though several examples of reactive systems show-
ing deliberative-like behaviors exist, for example Toto
(Matarić, 1992) and the reactive categorization robot by Pfeifer and Scheier (Pfeifer and Scheier, 1997), both the systems and the tasks they solve are typically
handcrafted, making them appear more as cute exam-
ples of clever design than solutions to a real problem.
The difference between reactive and deliberative systems has been described in terms of the amount of computation performed at run-time, (Matarić, 1997). A reactive control system can be derived from a planner by computing all possible plans off-line beforehand, thereby creating a universal plan (Schoppers, 1987).
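To make the idea of a universal plan concrete, the following minimal sketch (in Python; the toy grid world, the action names, and the breadth-first planner are illustrative assumptions, not Schoppers’s original formulation) performs all planning off-line, so that the run-time controller reduces to a single table lookup:

from collections import deque

def build_universal_plan(goal, neighbors):
    """Off-line phase: for every state that can reach the goal,
    store the first action of a shortest path (breadth-first search
    outwards from the goal).  The result is a pure state -> action
    lookup table, i.e. a universal plan."""
    plan = {goal: None}              # the goal itself needs no action
    frontier = deque([goal])
    while frontier:
        s = frontier.popleft()
        for prev, action in neighbors(s):  # states reaching s via action
            if prev not in plan:
                plan[prev] = action
                frontier.append(prev)
    return plan

# Toy 1-D corridor with positions 0..4 and the goal at position 4.
POSITIONS = range(5)

def neighbors(s):
    """Yield (predecessor, action) pairs that lead into state s."""
    if s - 1 in POSITIONS:
        yield s - 1, 'right'
    if s + 1 in POSITIONS:
        yield s + 1, 'left'

plan = build_universal_plan(goal=4, neighbors=neighbors)

def reactive_controller(observed_state):
    """On-line phase: no search, no deliberation, just a table lookup."""
    return plan[observed_state]

print(reactive_controller(1))   # -> 'right'

Once the table has been built, the controller never plans again; all the deliberation has been compiled away before the robot is started.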
This argument about on-line computation beauti-
fully points out how similar the two approaches of re-
active and deliberative control may be. Still, when
proposing the reactive approach, Rodney Brooks
pointed out a number of behavioral differences from
classical deliberative systems: “robots should be sim-
ple in nature but flexible in behavior, capable of act-
ing autonomously over long periods of time in uncer-
tain, noisy, realistic, and changing worlds”, (Brooks,
1986). So if a reactive controller is merely a pre-
computed plan, why these differences in behavior?
One critical issue is speed. Brooks often points
out the importance of real-time response and that the
cheap design of reactive systems allows much faster connections between sensors and actuators than deliberative planners do, (Brooks, 1990). Even though this was an important point in the early nineties, the increase in computational power in recent years allows continuous re-planning within a reactive time frame,
(Dawson, 2002).
Another reason may be that reactive controllers
are typically not derived from planners. Rather, reac-
tive controllers are handcrafted solutions specialized
for a certain type of robot. Achieving a specific com-
plex behavior in a reactive manner can be a challenge,
which may be one important reason for the limited
success of reactive systems in solving more complex,
sequential tasks (Nicolescu, 2003). Taking Matarić’s point about run-time computation into account, the re-
active approach still does not propose a clear way to
achieve a desired controller; it only shows that the de-
liberative part can be removed when intelligence has
been compiled into reactive decision rules.
Hybrid systems do obstacle avoidance using reac-
tive controllers not because re-planning is computa-
tionally heavy, but because re-planning is difficult to
implement. Even though one could imagine a plan-
ner generating exactly the same behavior as one of
Braitenberg’s vehicles avoiding obstacles, the struc-
ture of such a planner would probably be much more
complicated than the corresponding controller formu-
lated in reactive terms. This may in fact, at least
from an engineer’s point of view, be the most suit-
able distinction between the reactive and deliberative
perspectives. It appears that behaviors like obstacle avoidance and corridor following are easily formulated in reactive terms (as sketched below), while selecting a suitable path from a known map is better formulated using a planner. Other things, in fact most things, are too hard to design manually using either of these two approaches.
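As a minimal sketch of what a controller formulated in reactive terms can look like, the following Braitenberg-style obstacle avoider (the two-wheel layout, sensor names, and gains are assumptions for illustration) maps proximity readings directly onto wheel speeds, with no plan or world model in between:

def avoid_obstacles(left_prox, right_prox, base_speed=1.0, gain=0.5):
    """Braitenberg-style reactive obstacle avoidance: each proximity
    sensor (larger reading = obstacle closer) speeds up the wheel on
    its own side, so the robot steers away from the more obstructed
    side.  No plan, no map, no internal state."""
    left_motor = base_speed + gain * left_prox
    right_motor = base_speed + gain * right_prox
    return left_motor, right_motor

# Obstacle sensed mostly on the left: the left wheel spins faster,
# so the robot turns right, away from the obstacle.
print(avoid_obstacles(left_prox=0.8, right_prox=0.1))   # -> (1.4, 1.05)

A planner producing the same trajectory would need a world model and a search over it; here the behavior falls out of a two-line mapping from sensation to action.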
1.2 Emergence of Behavior
As mentioned in the previous section, supporters of
the reactive approach freely admit that the implemen-
tation of high-level deliberative-like skills in reactive
systems is very difficult, (Pfeifer and Scheier, 2001;
Nicolescu, 2003). The route to success is often said
to be emergence, (Maes, 1990; Matarić, 1997; Pfeifer
and Scheier, 1997). But what exactly does this mean?
The term emergent is commonly described as
something that is more than the sum of its parts, but
apart from that it is in fact hard to arrive at a defini-
tion suitable for all uses of the term, (Corning, 2002).
Within the field of intelligent robotics, emergence is
used to point out that a robot’s behavior is not explic-
itly defined in the controller, but something that ap-