of consecutive images, recorded and processed dur-
ing the navigation through scenarios 2, 3 and 4, respectively. Every image was taken before the robot had to turn to avoid the frontal obstacles, and shows obstacle points (in red) and ground points (in blue). Scene 2 presents inter-reflections, specularities, and numerous obstacles with both regular and irregular shapes. Scene 3 shows a route through a corridor with a highly textured floor, columns and walls. Scene 4 presents poor illumination conditions, many inter-reflections on the floor, and some image regions (walls) with almost homogeneous intensities and/or textures, which result in few distinctive features and poorly edged obstacles. Walls with a very homogeneous texture and few distinctive features can be difficult to detect as obstacles. In all scenes, all obstacle points with a D value between 20 mm and 80 mm were left unclassified, except in scene 4, where only those obstacle points with a D value between 20 mm and 45 mm were filtered out. Pictures (e) to (h), (z) to
(o) and (t) to (x) of figure 4 show the vertical contours
(in orange) comprising obstacle points. The angle of the computed steering vector is attached to every picture. For example, in picture (x) the objects are out of the ROI, so the computed turn angle is 0° (follow ahead). In picture (e) the obstacles are partially inside the ROI, so the robot turns to the right (40°). Although scene 4 presents a poor edge map and few SIFT features, the resulting steering vectors still guide the robot to the obstacle-free zone. Plots (1) to (4) illustrate the environment and the robot trajectory (blue circle: starting point; red circle: final point) for scenes 1, 2, 3 and 4, respectively. In all scenes, all features were correctly classified, obstacle profiles were correctly detected, and the robot navigated through the free space avoiding all obstacles. The steering vector is computed in the image and then used qualitatively to guide the robot.
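To make the decision rule explicit, the following minimal sketch (a hypothetical illustration, not the authors' implementation) classifies feature points from their IPT discrepancy D, discards points falling in the ambiguous band, and derives a qualitative turn command from the obstacle points inside the ROI. The 20-80 mm band and the 40° turn come from the text above; the names, the ROI bounds, the image width and the left/right rule are assumptions made for this example.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FeaturePoint:
    u: float   # image column (pixels)
    D: float   # IPT discrepancy (mm): near zero for ground points, large for obstacles

def classify(p: FeaturePoint, low: float = 20.0, high: float = 80.0) -> Optional[str]:
    """Ground below the ambiguous band, obstacle above it, unclassified inside it."""
    if p.D < low:
        return "ground"
    if p.D > high:
        return "obstacle"
    return None  # D in [low, high]: left unclassified, as described in the text

def steering_angle(points: List[FeaturePoint],
                   roi_left: float = 100.0, roi_right: float = 540.0,
                   img_center: float = 320.0, turn_deg: float = 40.0) -> float:
    """Return 0 (follow ahead) when no obstacle point lies inside the ROI,
    otherwise a fixed turn away from the side holding more obstacle points."""
    in_roi = [p for p in points
              if classify(p) == "obstacle" and roi_left <= p.u <= roi_right]
    if not in_roi:
        return 0.0                                   # obstacles out of the ROI
    left = sum(1 for p in in_roi if p.u < img_center)
    right = len(in_roi) - left
    return turn_deg if left >= right else -turn_deg  # positive = turn to the right

if __name__ == "__main__":
    demo = [FeaturePoint(u=150.0, D=120.0),  # obstacle inside the ROI, left side
            FeaturePoint(u=300.0, D=5.0),    # ground point
            FeaturePoint(u=200.0, D=50.0)]   # ambiguous D value: ignored
    print(steering_angle(demo))              # -> 40.0 (turn to the right)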
4 CONCLUSIONS
This paper introduces a new vision-based reactive navigation strategy for mobile robots. It employs an IPT-based feature classifier that distinguishes between ground and obstacle points with a success rate greater than 90%. The strategy was tested on a robot equipped with a wide-angle camera and was shown to tolerate scenes with shadows, inter-reflections, and different floor textures and lighting conditions. The experimental results suggest good performance, since the robot was able to navigate safely. In order to increase the classifier success rate, future research includes evaluating the sensitivity of the classifier to the camera resolution and focal length. Using different β values, depending on the image sector in which D is evaluated, could also improve the classifier performance.
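As a rough illustration of this last idea, the following sketch (hypothetical, not part of the evaluated system) selects a sector-dependent threshold. It assumes that β acts as the decision threshold applied to D; the sector layout and the numeric values are invented for the example.

def beta_for_sector(u: float, img_width: int = 640,
                    betas=(60.0, 80.0, 60.0)) -> float:
    # Pick the beta assigned to the vertical image sector containing column u.
    sector = min(int(u * len(betas) / img_width), len(betas) - 1)
    return betas[sector]

def is_obstacle(D: float, u: float) -> bool:
    # Declare a point an obstacle when its discrepancy D exceeds the beta of
    # the sector in which it is evaluated (all values are illustrative).
    return D > beta_for_sector(u)

print(is_obstacle(70.0, 50.0))    # lateral sector, beta = 60 -> True
print(is_obstacle(70.0, 320.0))   # central sector, beta = 80 -> False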
REFERENCES
Bertozzi, M. and Broggi, A. (1997). Vision-based vehicle
guidance. Computer, 30(7):49–55.
Bonin-Font, F., Ortiz, A., and Oliver, G. (2008).
A novel image feature classifier based on in-
verse perspective transformation. Technical re-
port, University of the Balearic Islands. A-01-2008
(http://dmi.uib.es/fbonin).
Borenstein, J. and Koren, Y. (1991). The vector field histogram - fast obstacle avoidance for mobile robots. IEEE Transactions on Robotics and Automation, 7(3):278–288.
Canny, J. (1986). A computational approach to edge detection. IEEE TPAMI, 8(6):679–698.
Duda, R. and Hart, P. (1973). Pattern Classification and
Scene Analysis. John Wiley and Sons Publisher.
Harris, C. and Stephens, M. (1988). Combined corner and
edge detector. In Proc. of the AVC, pages 147–151.
Hartley, R. and Zisserman, A. (2003). Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521623049.
Lowe, D. (2004). Distinctive image features from scale-
invariant keypoints. International Journal of Com-
puter Vision, 60(2):91–110.
Ma, G., Park, S., Müller-Schneiders, S., Ioffe, A., and Kummert, A. (2007). Vision-based pedestrian detection - reliable pedestrian candidate detection by combining IPM and a 1D profile. In Proc. of the IEEE ITSC, pages 137–142.
Mallot, H., Buelthoff, H., Little, J., and Bohrer, S. (1991).
Inverse perspective mapping simplifies optical flow
computation and obstacle detection. Biological Cy-
bernetics, 64(3):177–185.
Mikolajczyk, K. and Schmid, C. (2005). A perfor-
mance evaluation of local descriptors. IEEE TPAMI,
27(10):1615–1630.
Rodrigo, R., Chen, Z., and Samarabandu, J. (2006). Feature
motion for monocular robot navigation. In Proc. of the
ICIA, pages 201–205.
Zhou, J. and Li, B. (2006). Homography-based ground de-
tection for a mobile robot platform using a single cam-
era. In Proc. of the IEEE ICRA, pages 4100–4101.