Figure 7: The depth map from the stereo system
Figure 8: 3D images perceived by robots
5 CONCLUSION AND FURTHER WORK
Stereoscopic systems for robot navigation and
robot networks are currently feasible using
structured light and low-resolution real-time
devices. Although these devices do not match the
performance of the human depth perception
system, they are efficient for simple applications
such as obstacle avoidance and coordination
control of multiple robots. The system is low cost
and easily implemented on autonomous systems.
The active vision system can adapt to different
lighting environments and to varying camera
intrinsic and extrinsic parameters by using our
normalisation algorithms (Finlayson and Tian
1999) and by fusing the redundant data from the
structured light based stereo vision.
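The depth maps produced by such a system (e.g. Figure 7) rest on the standard stereo triangulation relation Z = fB/d, where f is the focal length in pixels, B the baseline, and d the disparity. A minimal sketch of this step follows; the function and parameter names are illustrative, not taken from the paper:

```python
import numpy as np

def depth_from_disparity(disparity, focal_length_px, baseline_m, min_disparity=1e-6):
    """Convert a disparity map to a depth map via Z = f * B / d.

    Pixels with disparity at or below min_disparity (unmatched or at
    infinity) are assigned an infinite depth.
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > min_disparity
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Example: a 10-pixel disparity with f = 500 px and B = 0.1 m gives Z = 5 m.
z = depth_from_disparity([[10.0]], focal_length_px=500.0, baseline_m=0.1)
```

For a structured-light system, the disparity map would come from matching projected pattern features between the two views rather than from dense intensity matching.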
Until recently, certain distributed-systems
aspects of multi-robot teams received little
attention. A sensing approach has been proposed
here for cooperative robotics. In the future, the
system will be integrated with panoramic stereo
vision systems for a wide range of position
monitoring (Bunschoten and Kröse 2002). Further
data fusion for robot networks or sensor networks
will be investigated (Büker et al. 2001).
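The data fusion envisaged above can be illustrated with the simplest scheme for combining redundant range measurements from multiple sensors or robots: inverse-variance weighting. This is a generic sketch of that technique, not the fusion method developed in the paper, and the names are assumptions:

```python
import numpy as np

def fuse_measurements(values, variances):
    """Fuse redundant measurements of one quantity by inverse-variance weighting.

    Each measurement is weighted by 1/variance, so noisier sensors
    contribute less; the fused variance is always at most the smallest
    input variance.
    """
    values = np.asarray(values, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused = np.sum(weights * values) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)
    return fused, fused_var

# Example: two equally reliable range readings of 2.0 m and 4.0 m
# fuse to 3.0 m with half the variance of either reading.
fused, var = fuse_measurements([2.0, 4.0], [1.0, 1.0])
```

In a robot network, the same rule applies per landmark or per obstacle, with each robot's depth estimate entering as one weighted measurement.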
REFERENCES
Büker U., Drüe S., Götze N., Hartmann G., Kalkreuter
B., Stemmer R. and Trapp R., 2001. Vision-based
control of an autonomous disassembly station,
Robotics and Autonomous Systems, Volume 35,
Issues 3-4, Pages 179-189.
Bunschoten R. and Kröse B., 2002. 3D scene
reconstruction from cylindrical panoramic images,
Robotics and Autonomous Systems, Volume 41,
Issues 2-3, Pages 111-118.
Drocourt C., Delahoche L., Pegard C. and Clerentin A.,
1999. Mobile robot localisation based on an
omnidirectional stereoscopic vision perception
system, Proc. of the 1999 IEEE Conference on
Robotics and Automation, Detroit, USA, pp 1329-1334.
Finlayson G. D. and Tian G. Y., 1999. Colour
normalization for colour object recognition,
International J. of Pattern Recognition and
Artificial Intelligence, Vol. 13, No. 8, pp 1271-1285.
Gledhill D., Tian G. Y., Taylor D. and Clarke D., 2004.
3D Reconstruction of a Region of Interest Using
Structured Light and Stereo Panoramic Images,
accepted for IV04, London.
Guivant J., Nebot E. and Baiker S., 2000.
Autonomous navigation and map building using
laser range sensors in outdoor applications, Journal
of Robotic Systems, Vol 17, No. 10, pp 565-583.
Li Y. F. and Lu R. S., 2004. Uncalibrated Euclidean 3-D
Reconstruction Using an Active Vision System,
Volume 20, Issue 1, pp. 15-25.
Lim J. H. and Leonard J. J., 2000. Mobile Robot
Relocation from Echolocation Constraints, IEEE
Transactions on Pattern Analysis and Machine
Intelligence, Vol. 22, No. 9, pp. 1035-1041.
Murray D. and Jennings C., 1997. Stereo vision based
mapping for a mobile robot, Proc. IEEE Conf. on
Robotics and Automation.
Nitzan D., 1988. Three-Dimensional Vision Structure
for Robot Applications, IEEE Transactions on
Pattern Analysis and Machine Intelligence, Vol. 10,
No. 3.
Tian G. Y., Gledhill D. and Taylor D., 2003.
Comprehensive interest points based imaging
mosaic, Pattern Recognition Letters, Vol. 24,
Issues 9-10, pp 1171-1179.
Xiao D., Song M., Ghosh B. K., Xi N., Tarn T. J. and
Yu Z., 2004. Real-time integration of sensing,
planning and control in robotic work-cells, Control
Engineering Practice, Volume 12, Issue 6, Pages
653-663.
STRUCTURED LIGHT BASED STEREO VISION FOR COORDINATION OF MULTIPLE ROBOTS