Generation of Data Sets Simulating Different Kinds of Cameras in Virtual Environments

Yerai Berenguer, Luis Payá, Oscar Reinoso, Adrián Peidró, Luis M. Jiménez

2016

Abstract

In this paper, a platform to create different kinds of data sets from virtual environments is presented. These data sets contain information about the visual appearance of the environment and the distance from a set of reference positions to all the objects. Robot localization and mapping using images are two active fields of research, and new algorithms are continuously proposed. These algorithms have to be tested with several sets of images to validate them. This task can be carried out using real images; however, when a change in the parameters of the vision system is needed to optimize the algorithms, the system must be replaced and new data sets must be captured, which entails a high cost and slows down the first stages of development. The objective of this work is to develop a versatile tool that permits generating data sets to efficiently test mapping and localization algorithms for mobile robots. Another advantage of this platform is that the images can be generated from any position in the environment and with any rotation. Moreover, the generated images are noise-free, which is an advantage since it allows a preliminary test of the algorithms under ideal conditions. The virtual environment can be created easily and modified depending on the desired characteristics. Finally, the platform permits carrying out other advanced tasks using the images and the virtual environment.
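
The following is a minimal, illustrative sketch (in Python, not the authors' implementation) of the kind of data generation described above: a noise-free panoramic capture containing both grey-level appearance and per-pixel distance to the objects, rendered from an arbitrary position and orientation in a simple virtual environment. The scene (a few spheres), the resolution, the field of view and the pose used below are assumptions made only for this example.

import numpy as np

# Virtual environment: spheres given as (centre, radius, grey level). Assumed scene.
SCENE = [
    (np.array([3.0, 0.0, 1.0]), 1.0, 200),
    (np.array([0.0, 4.0, 0.5]), 0.8, 120),
    (np.array([-2.0, -3.0, 1.5]), 1.2, 80),
]

def ray_sphere(origin, direction, centre, radius):
    """Distance along a unit-length ray to the sphere, or None if the ray misses it."""
    oc = origin - centre
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-6 else None

def render_panorama(position, yaw, width=360, height=90):
    """Cast one ray per pixel over 360 deg of azimuth and a limited elevation range,
    returning a grey-level image and the distance to the closest object per pixel."""
    image = np.zeros((height, width), dtype=np.uint8)
    depth = np.full((height, width), np.inf)
    for v in range(height):
        elevation = np.radians((v - height / 2) * 0.5)   # about +/- 22.5 deg, assumed
        for u in range(width):
            azimuth = np.radians(u) + yaw
            d = np.array([np.cos(elevation) * np.cos(azimuth),
                          np.cos(elevation) * np.sin(azimuth),
                          np.sin(elevation)])
            for centre, radius, grey in SCENE:
                t = ray_sphere(position, d, centre, radius)
                if t is not None and t < depth[v, u]:
                    depth[v, u] = t          # distance from the capture position
                    image[v, u] = grey       # visual appearance of the closest object
    return image, depth

# Capture from an arbitrary pose: any position in the environment, any rotation.
img, dist = render_panorama(position=np.array([0.0, 0.0, 1.0]), yaw=np.radians(30))
print(img.shape, int(np.isfinite(dist).sum()), "pixels hit an object")

Because the rays are cast analytically against the scene model, the resulting image and distance map are free of sensor noise, which matches the ideal-conditions testing scenario mentioned in the abstract.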



Paper Citation


in Harvard Style

Berenguer Y., Payá L., Reinoso O., Peidró A. and Jiménez L. (2016). Generation of Data Sets Simulating Different Kinds of Cameras in Virtual Environments. In Proceedings of the 13th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO, ISBN 978-989-758-198-4, pages 352-359. DOI: 10.5220/0005982403520359


in Bibtex Style

@conference{icinco16,
author={Yerai Berenguer and Luis Payá and Oscar Reinoso and Adrián Peidró and Luis M. Jiménez},
title={Generation of Data Sets Simulating Different Kinds of Cameras in Virtual Environments},
booktitle={Proceedings of the 13th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO},
year={2016},
pages={352-359},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005982403520359},
isbn={978-989-758-198-4},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 13th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO
TI - Generation of Data Sets Simulating Different Kinds of Cameras in Virtual Environments
SN - 978-989-758-198-4
AU - Berenguer Y.
AU - Payá L.
AU - Reinoso O.
AU - Peidró A.
AU - Jiménez L.
PY - 2016
SP - 352
EP - 359
DO - 10.5220/0005982403520359