Range and Vision Sensors Fusion for Outdoor 3D Reconstruction

Ghina El Natour, Omar Ait Aider, Raphael Rouveure, François Berry, Patrice Faure

2015

Abstract

Awareness of the surrounding environment is an essential task for several applications such as mapping, autonomous navigation and localization. In this paper we are interested in exploiting the complementarity of a panoramic microwave radar and a monocular camera for 3D reconstruction of large-scale environments. The robustness of the radar to environmental conditions and its ability to measure depth, on one hand, and the high spatial resolution of a vision sensor, on the other hand, make these two sensors well adapted for large-scale outdoor cartography. Firstly, the system model of the two sensors is presented and a new 3D reconstruction method based on the sensors' geometry is introduced. Secondly, we address the global calibration problem, which consists in finding the exact transformation between the radar and camera coordinate systems. The method is based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Both methods have been validated with synthetic and real data.
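
To illustrate the geometric idea behind such a radar-camera reconstruction, the following Python sketch intersects the viewing ray of a matched image point with the sphere defined by a radar range measurement, once a radar-to-camera transformation is available. It is not taken from the paper; the names (K, radar_center_cam, measured_range) and all numeric values are illustrative assumptions.

import numpy as np

def back_project(K, pixel):
    """Unit direction of the viewing ray through `pixel`, in the camera frame."""
    u, v = pixel
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)

def ray_sphere_intersection(ray_dir, sphere_center, radius):
    """Depths s > 0 such that || s*ray_dir - sphere_center || = radius,
    with the camera center taken as the origin of the ray."""
    b = -2.0 * ray_dir @ sphere_center
    c = sphere_center @ sphere_center - radius ** 2
    disc = b ** 2 - 4.0 * c
    if disc < 0:
        return []  # the ray misses the range sphere
    roots = [(-b - np.sqrt(disc)) / 2.0, (-b + np.sqrt(disc)) / 2.0]
    return [s for s in roots if s > 0]

# Assumed camera intrinsics, radar center expressed in the camera frame
# (i.e. the extrinsic translation), one radar range and one matched pixel.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
radar_center_cam = np.array([0.10, 0.25, 0.0])   # metres, hypothetical extrinsics
measured_range = 12.5                            # radar range to the target (m)
pixel = (350.0, 260.0)                           # matched image detection

d = back_project(K, pixel)
for s in ray_sphere_intersection(d, radar_center_cam, measured_range):
    print("candidate 3D point (camera frame):", s * d)

In practice the radar azimuth would be used to discard the spurious root, and the extrinsic transformation itself is what the paper's calibration step estimates by minimizing a non-linear criterion over a set of such radar-to-image correspondences.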



Paper Citation


in Harvard Style

El Natour G., Ait Aider O., Rouveure R., Berry F. and Faure P. (2015). Range and Vision Sensors Fusion for Outdoor 3D Reconstruction. In Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2015), ISBN 978-989-758-089-5, pages 202-208. DOI: 10.5220/0005324302020208


in BibTeX Style

@conference{visapp15,
  author={Ghina El Natour and Omar Ait Aider and Raphael Rouveure and François Berry and Patrice Faure},
  title={Range and Vision Sensors Fusion for Outdoor 3D Reconstruction},
  booktitle={Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2015)},
  year={2015},
  pages={202-208},
  publisher={SciTePress},
  organization={INSTICC},
  doi={10.5220/0005324302020208},
  isbn={978-989-758-089-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2015)
TI - Range and Vision Sensors Fusion for Outdoor 3D Reconstruction
SN - 978-989-758-089-5
AU - El Natour G.
AU - Ait Aider O.
AU - Rouveure R.
AU - Berry F.
AU - Faure P.
PY - 2015
SP - 202
EP - 208
DO - 10.5220/0005324302020208