extended to any number of different IDs; the only limitation is the number of different colors that Mezzanine is able to detect.
Once the robot's orientation is calculated, all the values necessary for determining its pose (x, y, angle) in the environment with respect to the leader are available. This pose is then transformed into the environment coordinate system, which is a straightforward operation, and sent to the corresponding robot so that it knows its own position.
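As an illustration, translating the pose measured with respect to the leader into the environment frame is a standard 2D rigid-body composition, since the leader's own pose in the environment is known. The following minimal Python sketch shows this step; the function name and the (x, y, angle) tuple representation are illustrative assumptions, not taken from the actual implementation.

    import math

    def compose_pose(leader_pose, relative_pose):
        # Leader pose (x, y, angle) in the environment frame.
        xl, yl, thl = leader_pose
        # Follower pose measured relative to the leader.
        xr, yr, thr = relative_pose
        # Rotate the relative position by the leader's heading, then translate.
        x = xl + xr * math.cos(thl) - yr * math.sin(thl)
        y = yl + xr * math.sin(thl) + yr * math.cos(thl)
        # Add the headings and wrap the result to (-pi, pi].
        th = math.atan2(math.sin(thl + thr), math.cos(thl + thr))
        return (x, y, th)

    # Example: leader at (2, 1) facing +90 degrees; follower 1 m ahead of it.
    print(compose_pose((2.0, 1.0, math.pi / 2), (1.0, 0.0, 0.0)))
    # -> (2.0, 2.0, 1.5707963...)

The composed pose is what would then be sent to the corresponding robot.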
4 CONCLUSIONS
A new method for the visual localization of robots has been implemented. Using a very common and simple target, it is possible to localize a robot and determine its position and orientation with respect to the robot carrying the camera and, consequently, in the environment.
The main advantage consists in using a very simple object: by means of the corresponding geometric constraints, it is possible to establish not only the distance to the target robot but also its orientation. Regarding the orientation, by means of a process of two simultaneous readings, it is possible to eliminate the accuracy errors produced by the specific features of the object used as a target.
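The way the two simultaneous readings are combined is not spelled out here; a natural choice, sketched below as an assumption rather than as the exact procedure used, is to average the two orientation estimates through their unit-vector representations so that angle wrap-around is handled correctly.

    import math

    def fuse_orientation(theta_a, theta_b):
        # Average two angle readings via their unit vectors; a plain
        # arithmetic mean would fail near the +/-180 degree boundary.
        x = math.cos(theta_a) + math.cos(theta_b)
        y = math.sin(theta_a) + math.sin(theta_b)
        return math.atan2(y, x)

    # Example: readings of 179 and -179 degrees fuse to 180 degrees, not 0.
    print(math.degrees(fuse_orientation(math.radians(179), math.radians(-179))))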
The localization of the robots by means of the colored targets has proved to be a demanding task due to the sensitivity of the vision system to the lighting conditions.
ACKNOWLEDGEMENTS
Support for this research is provided by the Fundació Caixa Castelló - Bancaixa under project P1-1A2008-12.
REFERENCES
Atienza, R. and Zelinsky, A. (2001). A practical zoom camera calibration technique: an application of active vision for human-robot interaction. In Proceedings of the Australian Conference on Robotics and Automation, pages 85–90.
Clady, X., Collange, F., Jurie, F., and Martinet, P. (2001). Object tracking with a pan tilt zoom camera, application to car driving assistance. In Proceedings of the International Conference on Advanced Robotics, pages 1653–1658.
Cox, I. and Wilfong, G. (1990). Autonomous Robot Vehicles. Springer Verlag.
Cubber, G., Berrabah, S., and Sahli, H. (2003). A Bayesian approach for color consistency based visual servoing. In Proceedings of the International Conference on Advanced Robotics, pages 983–990.
Das, K., Fierro, R., Kumar, V., Ostrowski, J. P., Spletzer, J., and Taylor, C. (2002). A vision-based formation control framework. IEEE Transactions on Robotics and Automation, 18(5):813–825.
Fox, D., Burgard, W., Kruppa, H., and Thrun, S. (2000). A probabilistic approach to collaborative multi-robot localization. Autonomous Robots, 8(3).
Fredslund, J. and Mataric, M. (2002). A general, local algorithm for robot formations. IEEE Transactions on Robotics and Automation (Special Issue on Advances in Multi-Robot Systems), 18(5):837–846.
Hosoda, K., Moriyama, H., and Asada, M. (1995). Visual servoing utilizing zoom mechanism. In Proceedings of the International Conference on Advanced Robotics, pages 178–183.
Howard, A. (2002). Mezzanine user manual; version 0.00. Technical Report IRIS-01-416, USC Robotics Laboratory, University of Southern California.
Michaud, F., Letourneau, D., Guilbert, M., and Valin, J. (2002). Dynamic robot formations using directional visual perception. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2740–2745.
Nebot, P. and Cervera, E. (2005). A framework for the development of cooperative robotic applications. In Proceedings of the 12th International Conference on Advanced Robotics, pages 901–906.
Renaud, P., Cervera, E., and Martinet, P. (2004). Towards a reliable vision-based mobile robot formation control. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3176–3181.
Sarcinelli-Filho, M., Bastos-Filho, T., and Freitas, R. (2003). Mobile robot navigation via reference recognition based on ultrasonic sensing and monocular vision. In Proceedings of the International Conference on Advanced Robotics, pages 204–209.