multiplied by the correlation of the current left
image with the matched left one, and likewise for
the right offset. This scheme can be useful when the
images of the two cameras differ considerably, or when
an obstacle or occlusion affects only one of the
cameras. In these cases, the control action of the
affected camera is multiplied by a very small quantity,
so it has little effect on the robot's navigation.
Moreover, the experiments that have been carried out
show that this control scheme slightly improves on the
results of the differential one. These results are
shown in fig. 5.
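As an illustration, the correlation-weighted control action described above can be sketched as follows. The function name, gain value and signature are assumptions made for this sketch, not the paper's actual implementation:

```python
def weighted_control(offset_l, offset_r, corr_l, corr_r, k=0.04):
    """Correlation-weighted P control action (illustrative sketch).

    Each camera's offset is scaled by the correlation between the
    current image and its best-matched stored image, so a camera
    affected by an occlusion (low correlation) contributes little
    to the resulting steering command.
    """
    u_l = k * offset_l * corr_l
    u_r = k * offset_r * corr_r
    return u_l + u_r

# An occluded left camera (corr_l near 0) barely affects the command:
u = weighted_control(offset_l=30.0, offset_r=-5.0, corr_l=0.05, corr_r=0.9)
```

With a low left-camera correlation, the large left offset contributes far less than the small but reliable right offset, which is the behavior the scheme is designed to produce.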
3 CONCLUSIONS AND FUTURE WORK
A solution to the problem of continuous
navigation using an appearance-based approach has
been proposed. Several control schemes have been
tested, including P, PD and PD controllers with
variable parameters. With these laws, the robot is
able to localize itself and follow the route within a
band of about two meters around the pre-recorded
route. It can do so even though the scene undergoes
small changes (illumination, position of some objects,
partial occlusions in one of the cameras). We are
now working on other control methods, such as fuzzy
logic.
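A minimal sketch of the PD law with variable parameters mentioned above. The paper does not specify the actual adaptation rule, so the error-proportional gain scaling below is purely hypothetical, as are the function name and default gains:

```python
def pd_control(error, prev_error, dt, kp=0.04, kd=0.04, variable=False):
    """PD control law sketch.

    With variable=True the gains are scaled with the error magnitude,
    one plausible 'variable parameters' scheme; the actual adaptation
    rule used in the paper is not specified here.
    """
    if variable:
        # Hypothetical scaling: soften the gains for small errors.
        scale = min(1.0, abs(error) / 100.0)
        kp, kd = kp * scale, kd * scale
    return kp * error + kd * (error - prev_error) / dt
```

The derivative term damps oscillations around the pre-recorded route, while the (hypothetical) gain scaling reduces corrections when the robot is already close to it.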
The main drawback of this navigation method
arises when the scenes are highly unstructured and
varying. In this case, it is necessary to increase the
image resolution to achieve acceptable navigation
accuracy. The proposed solution is based on reducing
the amount of information to store using PCA
subspaces. This method has two major advantages:
the vectors to be compared are much smaller, and
most of the computation can be carried out off-line,
so the information is available during navigation.
Moreover, the size of the vectors is independent of
the image resolution, so the method is expected to
work well in very unstructured environments.
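The off-line/on-line split described above can be sketched with a plain SVD-based PCA. The function names and the use of NumPy's SVD are assumptions of this sketch, not the paper's implementation:

```python
import numpy as np

def build_pca_subspace(images, k):
    """Off-line step: compute a k-dimensional PCA subspace from
    flattened training images (one image per row). The basis and mean
    are stored, so only k-vectors need to be handled at run time."""
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]              # shape (k, n_pixels)
    return mean, basis

def project(image, mean, basis):
    """On-line step: reduce a flattened image to a k-vector whose size
    is independent of the image resolution."""
    return basis @ (np.ravel(image) - mean)
```

Comparing two images then amounts to comparing two k-vectors instead of two full-resolution images, which is what makes the stored representation compact and the on-line matching cheap.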
ACKNOWLEDGEMENTS
This work has been supported by the Ministerio de
Educación y Ciencia through project DPI2004-07433-C02-01,
'Herramientas de teleoperación Colaborativa. Aplicación
al Control cooperativo de Robots'.
Figure 5: Average correlation during navigation for different control schemes. (a) P controller with Kl = Kr = 0.04. (b) P controller with derivative effect in advance speed, K2 = 0.04. (c) PD controller with K2 = 0.04 and K2D = 0.04. (d) PD controller with variable parameters, K2 = 0.04 and K2D = 0.04.
ICINCO 2005 - ROBOTICS AND AUTOMATION