Author:
Andrzej Bieszczad
Affiliation:
California State University Channel Islands, United States
Keyword(s):
Robot, Navigation, Localization, Cue Identification, Neural Networks, Support Vector Machines, Classification, Pattern Recognition.
Related Ontology Subjects/Areas/Topics:
Informatics in Control, Automation and Robotics; Intelligent Control Systems and Optimization; Machine Learning in Control Applications; Mobile Robots and Autonomous Systems; Perception and Awareness; Robotics and Automation
Abstract:
In this paper, we report on our ongoing efforts to build a cue identifier for mobile robot navigation using a simple one-plane LIDAR scanner and machine learning techniques. We applied various levels of Gaussian distortion to simulated scans of environmental cues in order to test, across a number of models, the effectiveness of training and the response to noise in the input data. We concluded that, in contrast to backpropagation neural networks, SVM-based models are very well suited for classifying cues, even under substantial Gaussian noise, while still preserving the efficiency of training even with relatively large data sets. Unfortunately, models trained with data representing just one stationary point of view of a cue are inaccurate when tested on data representing different points of view of the cue. Although the models are resilient to noisy data coming from the vicinity of the original point of view used in training, data that originates in a point of view shifted forward or backward (as would be the case with a mobile robot) proved much more difficult to classify correctly. In the research reported here, we used an expanded set of synthetic training data representing three viewpoints corresponding to three positions of the robot in relation to the location of the cues. We show that using the expanded data dramatically increases the accuracy of cue classification for test data coming from any of the three viewpoints.
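The abstract does not give implementation details, but the pipeline it describes can be illustrated with a minimal, hypothetical sketch: synthetic range scans for each cue and viewpoint are perturbed with Gaussian noise, and an SVM classifier is trained on the pooled multi-viewpoint data. The scan generator, the constants N_BEAMS and NOISE_SIGMA, and the use of scikit-learn's SVC are illustrative assumptions, not the authors' code or data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

N_BEAMS = 180          # assumed angular resolution of the one-plane scan
NOISE_SIGMA = 0.05     # assumed Gaussian noise level applied to ranges

def synthetic_scan(cue_id, viewpoint, n_beams=N_BEAMS):
    """Toy range profile for a cue seen from a given viewpoint (illustrative only)."""
    angles = np.linspace(0.0, np.pi, n_beams)
    base = 2.0 + 0.5 * viewpoint                       # range shifts with viewpoint
    return base + 0.3 * np.sin((cue_id + 1) * angles)  # cue-specific shape

def make_dataset(cue_ids, viewpoints, samples_per_combo=50, sigma=NOISE_SIGMA):
    """Build noisy scans for every (cue, viewpoint) pair."""
    rng = np.random.default_rng(0)
    X, y = [], []
    for cue in cue_ids:
        for vp in viewpoints:
            for _ in range(samples_per_combo):
                scan = synthetic_scan(cue, vp)
                X.append(scan + rng.normal(0.0, sigma, scan.shape))  # Gaussian distortion
                y.append(cue)
    return np.array(X), np.array(y)

# Expanded training set: three viewpoints per cue, analogous to the three robot positions.
X, y = make_dataset(cue_ids=range(4), viewpoints=[-1, 0, 1])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf")   # SVM-based cue classifier
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In this sketch, restricting viewpoints to a single value corresponds to the one-viewpoint training setting that generalized poorly, while passing all three values pools the expanded multi-viewpoint data described above.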