OBSTACLE DETECTION IN MOBILE OUTDOOR ROBOTS
A Short-term Memory for the Mobile Outdoor Platform RAVON
H. Schäfer, M. Proetzsch and K. Berns
Robotics Research Lab, University of Kaiserslautern, P.O. Box 3049, D-67653 Kaiserslautern, Germany
Keywords:
Obstacle Detection, Obstacle Avoidance, Behaviour-based Navigation, Mobile Robotics, Short-term Memory,
Distributed Minimal World Model, Algorithms, Biologically Inspired Robotics.
Abstract:
In this paper a biologically inspired approach for compensating the limited angle of vision in obstacle detection
systems of mobile robots is presented.
Most of the time it is not feasible to exhaustively monitor the environment of a mobile robot. In order to
nonetheless achieve safe navigation, obstacle detection mechanisms need to keep in mind certain aspects of
the environment. In mammals this task is carried out by the creature’s short-term memory. Inspired by this
concept, an absolute local map storing obstacles in terms of representatives has been introduced into the obstacle
detection and avoidance system of the outdoor robot RAVON. That way the gap between the fields of vision of
two laser range finders can be monitored, which prevents the vehicle from colliding with obstacles seen some
time ago.
1 INTRODUCTION
In recent years complex mobile robotic systems have
been developed for autonomous navigation in various
environments. The more complex a robot’s working
place gets, the more sophisticated its obstacle detection
and avoidance facilities need to be designed. First
of all, sensor equipment must be appropriate for the
requirements of given scenarios. In most cases such
sensors have a limited field of vision and therefore
cannot cover the complete area around the robot. One
possibility to overcome this limitation is mounting
several devices of the same kind. However, this leads
to higher weight and costs, and the evaluation of more
data might exceed the computational capacity.
In mammals the short-term memory is used to
compensate for the relatively small field of vision.
The hindlegs of a cat for example have to avoid ob-
stacles some time after the cat has visually perceived
the hindrance (McVea and Pearson, 2006). Clearly
the cat has to keep in mind certain information about
its local environment in order to safely navigate com-
plex terrain.
Figure 1: The outdoor robot RAVON in rough terrain.

In this paper an approach for adopting the principle
of a local short-term obstacle memory for mobile
outdoor robots is presented. The implemented system
is used to gather additional information for the
behaviour-based control system of the robot RAVON
(Robust Autonomous Vehicle for Outdoor Navigation,
see Figure 1). For now, data of two laser scanners
mounted at the front and the rear of the robot
is evaluated and inserted into the obstacle memory.
Schäfer H., Proetzsch M. and Berns K. (2007).
OBSTACLE DETECTION IN MOBILE OUTDOOR ROBOTS - A Short-term Memory for the Mobile Outdoor Platform RAVON.
In Proceedings of the Fourth International Conference on Informatics in Control, Automation and Robotics, pages 141-148
DOI: 10.5220/0001630701410148
Copyright © SciTePress
The approach, however, is kept generic such
that additional sensor systems, e.g. stereo camera
systems, can easily be integrated.
The outline of the paper is as follows: In the
next section, work related to the topic of this paper
is described. Afterwards we give some details on
the outdoor robot RAVON which is used for validating
the presented approach. Section 4 describes
the concept of the short-term memory including some
details about the obstacle detection methods. Section 5
shows an experiment used for evaluating the
presented approach. Finally, we conclude with a summary
and directions for future work.
2 STATE OF THE ART
Obstacle avoidance in outdoor terrain is the topic of
several publications. Most of them distinguish
between reactive obstacle avoidance and building up
complete geometric maps. Reactive approaches like
(Badal et al., 1994) compute steering vectors ac-
cording to the proximity of obstacles in the current
view. However, for vehicles supporting agile steer-
ing manoeuvres like sideward motion this neglects
the problem of possible collisions outside the field of
view. Other similar approaches ((Kamon et al., 1996),
(Laubach and Burdick, 1999)) add some kind of state
to the obstacle avoidance system but do not explicitly
consider positions of hidden obstacles.
Work on building up maps which contain
information about terrain elevation ((Shiller, 2000),
(Bonnafous et al., 2001)) also neglects the need to enlarge
the virtual field of vision. A high degree of computation
is used to evaluate the traversability of the
detected terrain region and to calculate a feasible tra-
jectory. However, in outdoor terrain given paths can-
not be followed precisely due to disturbances. There-
fore, an evaluation of possibly dangerous obstacles
needs to be undertaken during the driving manoeuvre.
The approach presented here can be seen in between
the completely reactive and the mapping approach.
A short-term memory keeping only the relevant
information serves as the source for a behaviour-based
system keeping the robot away from currently
relevant obstacles.
3 VEHICLE DESCRIPTION
The platform used to examine obstacle avoidance is
the four wheeled off-road vehicle RAVON (see Fig-
ure 1). It measures 2.35 m in length and 1.4 m in
width and weighs 400 kg.

Figure 2: Regions monitored by the obstacle avoidance facilities.

The vehicle features a four wheel
drive with independent motors yielding maximal
velocities of 3 m/s. In combination with its off-road
tires, the vehicle can climb slopes of 100% inclination,
making it well suited for the challenges of rough terrain.
Front and rear axles can be steered independently
which supports agile advanced driving manoeuvres
like double Ackerman and parallel steering.
In order to navigate autonomously,
RAVON has been equipped with several sensors. For
self-localisation purposes, the robot uses its odometry,
a custom-designed inertial measurement unit, a magnetic
field sensor, and a DGPS receiver. The sensor
data fusion is performed by a Kalman filter (Schmitz
et al., 2006) which calculates an estimated pose in
three dimensions.
In order to protect the vehicle from obstacles,
several safety regions are observed by different
sensor systems (Schäfer and Berns, 2006) (see
Fig. 2). First of all, hindrances can be detected using
the stereo camera system mounted at the front of the
vehicle. This obstacle detection facility is comple-
mented with two laser range finders (field of vision:
180 degrees, angular resolution: 0.5 degrees, distance
resolution: about 0.5 cm) monitoring the environment
nearby the vehicle. Proximity data from both sources
is used for obstacle avoidance by appropriate
behaviours, the fusion of which is performed inside
the behaviour network (Schäfer et al., 2005). In
case of emergency, the system is stopped on collision
by the safety bumpers which are directly connected to
the emergency stop to ensure maximal safety. In the
future, the compression of the spring system shall be
used to detect occluded obstacles in situations where
geometric obstacle detection cannot be used.
ICINCO 2007 - International Conference on Informatics in Control, Automation and Robotics
Figure 3: Top-level concept of RAVON’s Obstacle Avoidance Facility.
4 A SHORT-TERM MEMORY FOR
RAVON
In order to compensate for the blind angles of the sensor
systems, local short-term memories shall successively
be introduced. Any of these can be regarded as
a virtual sensor system which can seamlessly be integrated
into RAVON’s control system as described in
(Schäfer et al., 2005). That way, blind
regions can step by step be covered with increasing accuracy as
overlapping virtual sensors yield more and more additional
information. In this paper the validation of
the general approach and the connected technical facilities
is presented by the example of closing the gap
between the two laser range finders (see Figure 2).
Listing 1: Declarations.

# Raw sensor information from the
# laser range finders
scanner_data
# FIFO containing the candidate obstacles
obstacle_list_fifo
# The maximal length of the queue
# assures that the obstacle lists
# have a certain age when retrieved.
# Furthermore the scans are discarded
# if the vehicle is not moving to prevent
# the accumulation of outdated information.
max_number_of_scans
# Stores the current robot pose in the
# robot's absolute working coord. system
current_robot_pose
# Stores the current velocity of the robot
current_velocity
# Stores representatives in the blind
# angle to the left of the robot.
short_term_memory_left
# Stores representatives in the blind
# angle to the right of the robot.
short_term_memory_right
# Threshold of the maximal euclidean distance
# between the current vehicle pose and the
# one at which a scan was taken.
progress_threshold
Figure 3 illustrates the structure of the laser-
scanner-based part of the obstacle avoidance system.
The sensor flow¹ of the main loop which is dealing
with the short-term memory is outlined in Listing 2
(For explanation of the particular variables used see
Listing 1). The first step of the main loop is the de-
tection of obstacles from the laser range data which is
carried out in the Obstacle Detection facility. The cur-
rent vehicle pose in the robot’s absolute working coor-
dinate system is applied to the resulting obstacle lists
in order to yield an absolute reference map through
which the robot can virtually be navigated. After that
the lists are cast into sector maps which are used
by the Obstacle Avoidance behaviours to com-
pute evasive steering commands if necessary.
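How a sector map could be turned into an evasive steering component is illustrated by the following Python sketch; the function name, the gain and the sector-map layout are hypothetical and do not reflect RAVON’s actual behaviour implementation:

```python
def evasive_command(sectors, sector_angles, safety_distance, gain=1.0):
    """Derive a rotational evasion component from a sector map.

    sectors       -- closest obstacle distance per sector (None = free)
    sector_angles -- centre angle of each sector, positive = left
    Each sector closer than the safety distance pushes the robot away
    from its direction, weighted by proximity. Purely illustrative.
    """
    turn = 0.0
    for dist, angle in zip(sectors, sector_angles):
        if dist is not None and dist < safety_distance:
            # closer obstacles push harder; the sign steers away
            # from the side the obstacle is on
            side = 1.0 if angle >= 0.0 else -1.0
            turn -= gain * (safety_distance - dist) * side
    return turn
```

A fully free sector map leaves the steering command untouched; only sectors inside the safety distance contribute.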
For the extension described in this paper a FIFO
queue was introduced which holds in stock the obstacle
lists of several subsequent scans. In every
sense cycle older obstacle lists are examined for obstacle
representatives which may have migrated into
the blind angles. Older in this sense can be interpreted
in two ways: a) the scan was taken a certain time ago
(age criterion²) or b) the scan was taken a certain distance
ago (progress criterion³).

¹ The control concept underlying all robot projects at the
Robotics Research Lab strictly separates the main loop into
a bottom-up sense and a top-down control cycle which are
alternately invoked.
Listing 2: Main Loop.

Sense()
  # Detect obstacles from range information
  DetectObstacles(scanner_data, obstacle_list_fifo)
  if (current_velocity != 0) do
    # remove representatives which are no
    # longer in the scanner's blind angle
    Trim(short_term_memory_left, current_robot_pose)
    Trim(short_term_memory_right, current_robot_pose)
    # check age criterion (a)
    while (Size(obstacle_list_fifo) >
           max_number_of_scans) do
      obstacle_list = First(obstacle_list_fifo)
      FillObstacleMemory(short_term_memory_left,
                         short_term_memory_right,
                         obstacle_list,
                         current_robot_pose)
      RemoveFirst(obstacle_list_fifo)
    od
    # check progress criterion (b)
    obstacle_list = First(obstacle_list_fifo)
    while (Distance(Pose(obstacle_list),
                    current_robot_pose)
           > progress_threshold) do
      FillObstacleMemory(short_term_memory_left,
                         short_term_memory_right,
                         obstacle_list,
                         current_robot_pose)
      RemoveFirst(obstacle_list_fifo)
      obstacle_list = First(obstacle_list_fifo)
    od
  else
    # discard older scans if not moving
    while (Size(obstacle_list_fifo) >
           max_number_of_scans) do
      RemoveFirst(obstacle_list_fifo)
    od
  esle
esneS
If one of the two criteria applies to an obstacle list,
FillObstacleMemory checks the obstacle representatives
and places them into the corresponding
short-term memory. In order to prevent the short-term
memory from overfilling, Trim (see Listing 4)
removes obstacle representatives which are no longer
in the robot’s blind angle. Furthermore the obstacle
list FIFO has a limited maximal size. That way only
a limited number of scans is kept in mind and examined,
which minimises memory consumption and
computational expenses.
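The FIFO bookkeeping described above can be sketched as follows; this is a minimal Python illustration, and the class and method names are hypothetical, not RAVON’s actual API:

```python
from collections import deque

class ObstacleListFifo:
    """Bounded queue of (pose, obstacle_list) pairs, mirroring the
    age criterion (maximal queue length) and the progress criterion
    (euclidean distance travelled since the scan was taken)."""

    def __init__(self, max_number_of_scans, progress_threshold):
        self.queue = deque()
        self.max_number_of_scans = max_number_of_scans
        self.progress_threshold = progress_threshold

    def push(self, pose, obstacle_list):
        self.queue.append((pose, obstacle_list))

    def pop_aged(self):
        # age criterion (a): the oldest lists leave the queue as soon
        # as its maximal length is exceeded
        while len(self.queue) > self.max_number_of_scans:
            yield self.queue.popleft()

    def pop_progressed(self, current_pose, dist_fn):
        # progress criterion (b): lists taken further away than the
        # threshold are retrieved early, e.g. at higher speeds
        while self.queue and dist_fn(self.queue[0][0],
                                     current_pose) > self.progress_threshold:
            yield self.queue.popleft()

    def discard_aged(self):
        # if the vehicle is not moving, aged scans are dropped unevaluated
        for _ in self.pop_aged():
            pass
```

The retrieved lists would then be handed to FillObstacleMemory exactly as in Listing 2.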
Like the Obstacle Sector Maps, the short-term
memories are evaluated by the Obstacle
Avoidance component which computes evasive
steering commands and inhibits the Homing behaviours
if applicable. The steering commands
from the Homing facility and the Obstacle
Avoidance are fused according to the behaviour architecture
described in (Proetzsch et al., 2005). If no
obstacles are present Homing successively navigates
along a trajectory given in subsequent target poses
provided by the Navigator component.

² The age of a scan is modelled implicitly by the maximal
number of scans in the queue.
³ The progress threshold assures that no large gaps occur
in the obstacle memory at higher speeds as scans are
examined earlier.

Figure 4: Screenshot showing clustering of detected range
values (top) and photo showing the scenario (bottom).
4.1 Obstacle Detection
The range data from the laser scanners is first of all
clustered according to point-wise adjacency in order
to filter artefacts and entities which are likely to rep-
resent flexible vegetation (See Figure 4). In this figure
the current laser scanner data (dark points) are inter-
preted as clusters (red boxes) according to their dis-
tance. Each of the clusters is then evaluated concerning
its dimension and its number of representatives.
If both criteria are met, the cluster is interpreted
as an obstacle.
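A minimal sketch of such adjacency clustering, assuming the scan arrives as an ordered list of (x, y) points; the function names and thresholds are illustrative only, not the actual implementation:

```python
import math

def cluster_scan(points, adjacency_threshold):
    """Group consecutive scan points whose mutual distance is below
    the threshold; 'points' is the ordered list of (x, y) hits of
    one scan."""
    clusters, current = [], []
    for p in points:
        if current and math.dist(p, current[-1]) > adjacency_threshold:
            clusters.append(current)
            current = []
        current.append(p)
    if current:
        clusters.append(current)
    return clusters

def is_obstacle(cluster, min_points, max_extent):
    """A cluster counts as an obstacle if it has enough representatives
    (filters single-point artefacts) and a bounded extent; both
    thresholds are illustrative."""
    xs = [p[0] for p in cluster]
    ys = [p[1] for p in cluster]
    extent = max(max(xs) - min(xs), max(ys) - min(ys))
    return len(cluster) >= min_points and extent <= max_extent
```

Sparse single-point returns, e.g. from swaying grass, fail the minimum-point criterion and are filtered out before obstacle avoidance sees them.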
In order to have a uniform interpretation of near
obstacles which can be used by obstacle avoidance
behaviours, the sensor coverage area is divided into
sectors. In Figure 4 the closest obstacle in each of
these sectors is indicated by a green line. The human
to the left and the bushes to the right of the
robot have clearly been detected as obstacles. Note
that the bushes are subject to severe noise and are
therefore detected as a large number of small
fluctuating objects. For that reason obstacles cannot
be correlated over time, which is a problem for local
map building as representatives accumulate over time
if the robot is moving slowly. Using an occupancy
grid would be a solution to this problem but it lacks
accuracy because of rasterisation. Furthermore, the
consideration of objects moving independently from
the vehicle, which shall be introduced in the future,
would be considerably complicated.
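The reduction to per-sector minima described above could be sketched like this; the field-of-vision handling and all names are assumptions, not RAVON’s actual implementation:

```python
import math

def sector_map(points, num_sectors, fov=math.pi):
    """Divide the field of vision into equal angular sectors and keep
    only the closest range value per sector (the green lines in
    Figure 4). Points are (x, y) in the scanner frame with x pointing
    ahead; sectors that receive no hit stay None."""
    sectors = [None] * num_sectors
    for x, y in points:
        angle = math.atan2(y, x)
        if abs(angle) > fov / 2:
            continue  # outside the scanner's field of vision
        idx = min(int((angle + fov / 2) / fov * num_sectors),
                  num_sectors - 1)
        r = math.hypot(x, y)
        if sectors[idx] is None or r < sectors[idx]:
            sectors[idx] = r
    return sectors
```

Because only one distance per sector survives, the noisy multitude of small vegetation clusters collapses into a compact representation the avoidance behaviours can react to.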
4.2 Short-term Memory
As already indicated above, obstacles detected in laser
range data consist of points which represent the boundary
of a rigid object. Thus in every sensor cycle a list
of such point clusters⁴ is generated and stored in a
FIFO queue for further processing. To every obstacle
list the obstacle detection facility attaches the current
robot pose in an absolute working coordinate system.
In later sense cycles older obstacle lists are re-
trieved from the FIFO queue in order to be examined
for obstacle representatives which might now be in
the blind angle of the laser scanners. Note that in this
context the algorithm drops the abstraction of obsta-
cles in favour of simplicity. This does not result in a
loss of information as all the robot can actually do if
it detects a remembered obstacle at its flank is try to
keep away from it by adding a translational compo-
nent to the steering command. Which representative
belongs to what obstacle is in that sense irrelevant.
To fill the obstacle memory, routine
FillObstacleMemory (see Listing 3) examines all obstacles
in a provided obstacle list. In order to determine
the relation between the vehicle at its current
pose and the scan points registered earlier, all representatives
are first of all transferred into the absolute
working coordinate system of the robot using the
pose previously attached to the obstacle list (subroutine
ToAbsoluteContext). Intuitively explained,
this is as if you would virtually drive a robot model
through an absolute landscape while checking for
points in the blind regions of the robot. That way it is
not necessary to transform the representatives whenever
the robot changes its pose.

⁴ In the following this list shall be referred to as obstacle list.

Figure 5: Schema of how to determine whether an obstacle
representative resides in the blind region of the vehicle.
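In 2D, the ToAbsoluteContext transformation amounts to a rotation by the recorded heading followed by a translation by the recorded position; a possible sketch, assuming a hypothetical (x, y, yaw) pose layout:

```python
import math

def to_absolute_context(pose, representative):
    """Transform a representative from the robot frame (at scan time)
    into the absolute working coordinate system. 'pose' is the
    (x, y, yaw) attached to the obstacle list when the scan was taken."""
    px, py, yaw = pose
    rx, ry = representative
    c, s = math.cos(yaw), math.sin(yaw)
    # rotate by the recorded heading, then translate by the recorded position
    return (px + c * rx - s * ry, py + s * rx + c * ry)
```

Once transformed, a representative never has to be touched again, no matter how the robot moves afterwards.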
Listing 3: Fill the Obstacle Memory.

FillObstacleMemory(short_term_memory_left,
                   short_term_memory_right,
                   obstacle_list,
                   current_robot_pose)
  forall (obstacle in obstacle_list) do
    forall (representative in obstacle) do
      absolute_representative =
          ToAbsoluteContext(Pose(obstacle_list),
                            representative)
      y = PointOnLine(XAxis(current_robot_pose),
                      absolute_representative)
      x = PointOnLine(YAxis(current_robot_pose),
                      absolute_representative)
      if (Abs(y) < Width(short_term_memory) / 2 AND
          Abs(x) < Length(short_term_memory) / 2)
        if (ToRightOfRobot(current_robot_pose,
                           absolute_representative))
          Append(short_term_memory_right,
                 absolute_representative)
        else
          Append(short_term_memory_left,
                 absolute_representative)
        fi
      fi
    od
  od
yromeMelcatsbOlliF
Once in an absolute context, the coordinates
can remain where they are. With the subroutine
PointOnLine the distance of the representative to
the X-axis and the Y-axis of the robot-local coordinate
system is computed (see Figure 5). This yields
the robot-local coordinates of the representative, which
offer a straightforward check whether the representative
resides in a blind area of the robot. Depending
on whether the representative is rather to the left or
the right of the robot, it is added to the corresponding
short-term memory. Routine Trim (see Listing 4)
works in analogy, just the other way around, to sweep
representatives from the short-term memories which
are no longer in the blind angle of the scanners.
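The robot-local projection and the blind-area test can be sketched as follows; the axis conventions are an assumption based on Figure 5, and the function names are hypothetical:

```python
import math

def robot_local(current_pose, absolute_point):
    """Project an absolute point onto the robot's X and Y axes (the two
    PointOnLine computations), yielding robot-local coordinates."""
    px, py, yaw = current_pose
    dx, dy = absolute_point[0] - px, absolute_point[1] - py
    c, s = math.cos(yaw), math.sin(yaw)
    x = c * dx + s * dy    # longitudinal offset along the robot's X axis
    y = -s * dx + c * dy   # lateral offset from the robot's X axis
    return x, y

def trim(memory, current_pose, length, width):
    """Keep only representatives that are still inside the blind region
    (a box of the given length and width around the robot)."""
    return [p for p in memory
            if abs(robot_local(current_pose, p)[1]) <= width / 2
            and abs(robot_local(current_pose, p)[0]) <= length / 2]
```

Representatives the robot has driven past fall outside the box and are swept from the memory, bounding its size.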
Listing 4: Trim the Short-term Memory.

Trim(short_term_memory, current_robot_pose)
  forall (representative in short_term_memory) do
    y = PointOnLine(XAxis(current_robot_pose),
                    representative)
    x = PointOnLine(YAxis(current_robot_pose),
                    representative)
    # Check whether the representative is
    # still in the blind angle
    if (Abs(y) > Width(short_term_memory) / 2 OR
        Abs(x) > Length(short_term_memory) / 2)
      Remove(representative, short_term_memory)
    fi
  od
mirT
Note that the robot pose used during the procedures
outlined above currently relies on odometry
only. This is due to the fact that odometry is locally
coherent in comparison to data fused from odometry,
GPS, the inertial measurement unit and the magnetic
field sensor. Even though globally more accurate, the
latter three introduce certain local jumps into the pose,
which are a problem for precise local navigation. The
pose drift inherent in odometry, on the other hand, is
not an issue for the approach presented in this paper
as it is enough to be rather precise over the distance
of a vehicle length. When the robot has passed an
obstacle it is erased from the short-term memory anyway.
In the future, visual odometry shall additionally
be called into service in order to gain more accurate
pose information. In particular on slippery surfaces
the authors expect this combination to be a lot more
robust than banking on odometry only.
5 EVALUATION IN
EXPERIMENTS
The short-term memory described before was implemented
and integrated into the control system of RAVON.

Figure 6: Image of the test scenario with overlaid trace.

In order to evaluate the approach, several test
runs have been executed. First of all, autonomous navigation
along predefined tracks was performed. In
this case the vehicle follows the given direction to the
target point as long as there is no hindrance. In case of
obstacles, however, the straight motion is interrupted
by avoidance behaviours. This effect remains active
even if the obstacle is in the blind side region. Therefore
the robot gets back on track only once all
hindrances are left behind.
For clarification of the properties of the short-term
memory, a test run is presented in this section in which
the robot is remote controlled. In this experiment the
robot is driven through a locally flat environment. All
the time the behaviour-based control system evaluates
sensor information and overrides the given commands
if necessary in order to avoid collisions.
The scenario for this run is hilly grass land as
demonstrated in Figure 6. The image shows some
trees to be avoided as well as some bushes with un-
known load bearing surface. For clarification the ap-
proximate path of the vehicle is overlaid. Four mark-
ers indicate positions referred to in the following de-
scription.
Using the localisation system the trace of the ve-
hicle can be plotted as presented in Figure 7. Some of
the trees and bushes are included in order to point
out the sensor data and the vehicle’s reactions. Figure 8
shows the scanner data as well as clusters and sector
values (see Section 4.1) for the four positions indi-
cated by markers in the trace plot. Additionally data
of the obstacle memory at the side of the vehicle is
depicted as blue lines.
Position 1 shows a situation where the robot was
steered into the direction of a tree. After an evad-
ing manoeuvre the robot is now located next to the
tree. The sensor data shows that the obstacle is at
the edge of the side region and will eventually be
hidden. However, the blue lines indicate data stored
in the obstacle memory. Therefore the robot continues
to preserve a minimal distance to the obstacle.

Figure 7: Top view position diagram of the experiment.

At position 2 the robot is in a similar situation. A tree
which was detected by the laser range finder at the
front changed its relative position such that it is located
at the right side of the robot.
Positions 3 and 4 show the situation where the
robot is manually steered to the left. Due to the ob-
stacles detected and stored in the short-term memory
the evading behaviours force the robot away from the
hindrances. This interaction results in a smooth fol-
lowing of the bushes.
The experiments showed a major improvement in
the reaction to obstacles. Behaviours evaluating the
short-term memory prevent the robot from driving in
the direction of hidden obstacles. For static obstacles
this leads to a system reacting to hindrances located
anywhere around the vehicle.
6 CONCLUSIONS AND FUTURE
WORK
In this paper we presented an approach for virtually
extending the coverage of obstacle detection sensor
systems. The formulated algorithm uses information
about the vehicle motion to propagate sensor data into
the blind zone of the robot. Due to a massive reduction
of stored data, the system features low memory
usage. Additionally, there is no need for calculating
correspondences between consecutive scans, which makes
the approach applicable in real-time scenarios.
Figure 8: Sensor data visualisation during the experiment.

Next steps involve the integration of a stereo camera
system. Due to the device’s limited field of view,
the presented algorithm leads to a considerable extension
of the supervised area surrounding the vehicle. Additionally,
the accuracy of the local positioning system
will be enhanced by visual ego motion estimation.
REFERENCES
Badal, S., Ravela, S., Draper, B., and Hanson, A. (1994). A
practical obstacle detection and avoidance system. In
IEEE Workshop on Applications of Computer Vision.
Bonnafous, D., Lacroix, S., and Simon, T. (2001). Motion
generation for a rover on rough terrain. In IEEE/RSJ
International Conference on Intelligent Robots and
Systems.
Kamon, I., Rivlin, E., and Rimon, E. (1996). A new
range-sensor based globally convergent navigation al-
gorithm for mobile robots. In IEEE International
Conference on Robotics and Automation (ICRA).
Laubach, S. L. and Burdick, J. W. (1999). An autonomous
sensor-based path-planner for planetary microrovers.
In IEEE Int. Conf. on Robotics and Automation.
McVea, D. and Pearson, K. (2006). Long-lasting memories
of obstacles guide leg movements in the walking cat.
The Journal of Neuroscience.
Proetzsch, M., Luksch, T., and Berns, K. (2005). Fault-
tolerant behavior-based motion control for offroad
navigation. In 20th IEEE International Conference on
Robotics and Automation (ICRA), Barcelona, Spain.
Schäfer, H. and Berns, K. (2006). Ravon - an autonomous
vehicle for risky intervention and surveillance. In In-
ternational Workshop on Robotics for risky interven-
tion and environmental surveillance - RISE.
Schäfer, H., Proetzsch, M., and Berns, K. (2005). Extension
approach for the behaviour-based control system
of the outdoor robot ravon. In Autonome Mobile Sys-
teme.
Schmitz, N., Proetzsch, M., and Berns, K. (2006). Pose esti-
mation in rough terrain for the outdoor vehicle ravon.
In 37th International Symposium on Robotics (ISR).
Shiller, Z. (2000). Obstacle traversal for space exploration.
In IEEE International Conference on Robotics and
Automation.