The detailed results in Table 1 show that although
raw height, step size, and slope each misclassify
around half of the points on their own, they still
enhance the combined classifier because they detect
different types of obstacles. For example, raw height
only considers an obstacle's height, whereas step size
only detects changes in height. For the Combined
(with gap removal) classifier, the 4.4%
misclassification rate has proven acceptable in
practice, as it is often only small parts of the road
that are mislabelled as obstacles (the robot simply
navigates around the suspicious terrain).
Table 1: Experimental results from the classifiers

Classifier                    Misclassifications   Percentage misclassified
Raw height                    2334                 64.8%
Step size                     2156                 59.9%
Slope                         1657                 46.0%
Roughness                      663                 18.4%
Combined                       393                 10.9%
Combined (with gap removal)    157                  4.4%
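The fusion and gap-removal steps discussed above might be sketched as follows. Note that the exact fusion rule and gap criterion are not given here, so a simple voting threshold and a run-length filter are assumed for illustration:

```python
import numpy as np

def fuse_classifiers(labels, min_votes=2):
    """Fuse per-point binary obstacle labels (one row per classifier).

    `min_votes` is an assumed voting threshold; the exact fusion
    rule used by the combined classifier is not specified here.
    """
    votes = np.sum(labels, axis=0)
    return votes >= min_votes

def remove_gaps(obstacle, max_gap=3):
    """Relabel short interior runs of obstacle points (length <= max_gap)
    surrounded by road as road -- an assumed form of the 'gap removal'
    post-processing step."""
    out = obstacle.copy()
    n = len(out)
    i = 0
    while i < n:
        if out[i]:
            j = i
            while j < n and out[j]:
                j += 1  # find the end of this obstacle run
            # only drop runs fully surrounded by road points
            if (j - i) <= max_gap and i > 0 and j < n:
                out[i:j] = False
            i = j
        else:
            i += 1
    return out
```

With three classifiers voting over five scan points, a point flagged by at least two classifiers is kept as an obstacle, and an isolated interior obstacle run is then filtered out by the gap-removal pass.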
4 SUMMARY
A classifier fusion algorithm was proposed that
enables a mobile robot to locate and travel along a
safe path in a natural environment using a 2D laser
scanner, a civil GPS receiver, and odometry.
Although the performance of the individual
classifiers, based on simple single-scan statistics,
was not impressive, the combined set of classifiers
was found to perform quite acceptably in
distinguishing a dirt road from the surrounding
terrain, with less than 5% of scanned points being
misclassified. The performance was documented in a
natural environment. This work has shown that 2D
laser scans can provide considerable information
about a semi-structured natural environment.
Ongoing work includes maintaining an estimate
of the road's position across the trajectory of
multiple robot positions and using this information
in the classifier. Quantifying the accuracy of a given
classification without ground truth is also being
investigated. Lastly, attempts are being made to
detect the type of road surface currently being
navigated. This may allow adaptive tuning of the
classifiers by making the thresholds in the
hypothesis tests dependent on the road surface.
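Such surface-dependent tuning could take the form of a per-surface threshold table consulted by the hypothesis tests. The surface names, statistic names, and values below are purely hypothetical placeholders for illustration:

```python
# Hypothetical per-surface thresholds for the hypothesis tests.
# Surface labels, statistic names, and values are illustrative only.
THRESHOLDS = {
    "dirt":   {"roughness": 0.05, "step": 0.10},
    "gravel": {"roughness": 0.08, "step": 0.12},
}

def classify_point(stats, surface="dirt"):
    """Flag a scan point as an obstacle if any statistic exceeds
    its threshold for the detected road surface."""
    t = THRESHOLDS[surface]
    return stats["roughness"] > t["roughness"] or stats["step"] > t["step"]
```

Under this sketch, the same scan-point statistics can yield different labels depending on the detected surface, which is the intended effect of the adaptive tuning.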
TERRAIN CLASSIFICATION FOR OUTDOOR AUTONOMOUS ROBOTS USING SINGLE 2D LASER SCANS -
Robot perception for dirt road navigation