COMBINING ABSOLUTE POSITIONING AND VISION FOR WIDE AREA AUGMENTED REALITY

Tom Banwell, Andrew Calway

Abstract

One of the major limitations of vision-based mapping and localisation is its inability to scale and operate over wide areas, which restricts its use in applications such as Augmented Reality. In this paper we demonstrate that integrating a second, absolute positioning sensor addresses this problem, allowing independent local maps to be combined within a global coordinate frame. This is achieved by aligning trajectories from the two sensors, which enables estimation of the relative position, orientation and scale of each local map. The second sensor provides the additional benefit of reducing the search space required for efficient relocalisation. Results illustrate the method working in an indoor environment using an ultrasound position sensor, building and combining a large number of local maps and successfully relocalising as users move arbitrarily within the map. To show the generality of the proposed method, we also demonstrate the system building and aligning local maps in an outdoor environment using GPS as the position sensor.
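The trajectory-alignment step described above amounts to a least-squares similarity transform between two point sets, as in Umeyama (1991), cited as reference 15 below. The following is a minimal sketch of that estimator, not the authors' implementation; function and variable names are our own:

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Least-squares similarity transform (Umeyama, 1991) mapping
    points src -> dst, each an (N, 3) array of trajectory samples.
    Returns scale s, rotation R (3x3) and translation t (3,) such
    that dst_i ~= s * R @ src_i + t."""
    n = src.shape[0]
    mu_src = src.mean(axis=0)
    mu_dst = dst.mean(axis=0)
    src_c = src - mu_src
    dst_c = dst - mu_dst
    # Cross-covariance between the centred point sets.
    cov = dst_c.T @ src_c / n
    U, D, Vt = np.linalg.svd(cov)
    # Sign correction keeps R a proper rotation (det(R) = +1).
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / n
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

Applied to time-synchronised samples of the camera trajectory (in a local map's coordinates) and the absolute position sensor's trajectory, this yields the scale, orientation and translation needed to place the local map in the global frame.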

References

  1. Castle, R. O., Klein, G., and Murray, D. W. (2008). Video-rate localization in multiple maps for wearable augmented reality. In Proc. 12th IEEE Int. Symp. on Wearable Computers, pages 15-22.
  2. Chekhlov, D., Gee, A., Calway, A., and Mayol-Cuevas, W. (2007). Ninja on a plane: Automatic discovery of physical planes for augmented reality using visual slam. In International Symposium on Mixed and Augmented Reality (ISMAR).
  3. Chekhlov, D., Mayol-Cuevas, W., and Calway, A. (2008). Appearance based indexing for relocalisation in realtime visual slam. In 19th British Machine Vision Conference, pages 363-372. BMVA.
  4. Chekhlov, D., Pupilli, M., Mayol-Cuevas, W., and Calway, A. (2006). Real-time and robust monocular slam using predictive multi-resolution descriptors. In 2nd International Symposium on Visual Computing.
  5. Clemente, L., Davison, A., Reid, I., Neira, J., and Tardos, J. (2007). Mapping large loops with a single hand-held camera. In Robotics: Science and Systems.
  6. Davison, A., Reid, I., Molton, N., and Stasse, O. (2007). MonoSLAM: Real-Time Single Camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):1052-1067.
  7. Klein, G. and Murray, D. (2007). Parallel tracking and mapping for small AR workspaces. In Proc. Sixth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'07), Nara, Japan.
  8. Newman, J., Ingram, D., and Hopper, A. (2001). Augmented reality in a wide area sentient environment. In Proc. IEEE and ACM International Symposium on Augmented Reality (ISAR).
  9. Park, Y., Lepetit, V., and Woo, W. (2008). Multiple 3d object tracking for augmented reality. In Proc. Seventh IEEE and ACM International Symposium on Mixed and Augmented Reality, pages 117-120.
  10. Piekarski, W., Avery, B., Thomas, B., and Malbezin, P. (2004). Integrated head and hand tracking for indoor and outdoor augmented reality. In VR '04: Proceedings of the IEEE Virtual Reality 2004, pages 11-276, Chicago, IL.
  11. Pinies, P., Lupton, T., Sukkarieh, S., and Tardos, J. D. (2007). Inertial aiding of inverse depth slam using a monocular camera. In IEEE International Conference on Robotics and Automation, Roma, Italy.
  12. Pinies, P. and Tardos, J. (2008). Large-scale slam building conditionally independent local maps: Application to monocular vision. IEEE Transactions on Robotics, 24(5):1094-1106.
  13. Pupilli, M. and Calway, A. (2006). Real-time camera tracking using known 3d models and a particle filter. In International Conference on Pattern Recognition.
  14. Randell, C. and Muller, H. (2001). Low cost indoor positioning system. In Ubicomp 2001: Ubiquitous Computing, pages 42-68.
  15. Umeyama, S. (1991). Least-squares estimation of transformation parameters between two point patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(4):376-380.


Paper Citation


in Harvard Style

Banwell T. and Calway A. (2010). COMBINING ABSOLUTE POSITIONING AND VISION FOR WIDE AREA AUGMENTED REALITY. In Proceedings of the International Conference on Computer Graphics Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2010) ISBN 978-989-674-026-9, pages 353-357. DOI: 10.5220/0002830803530357


in Bibtex Style

@conference{grapp10,
author={Tom Banwell and Andrew Calway},
title={COMBINING ABSOLUTE POSITIONING AND VISION FOR WIDE AREA AUGMENTED REALITY},
booktitle={Proceedings of the International Conference on Computer Graphics Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2010)},
year={2010},
pages={353-357},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0002830803530357},
isbn={978-989-674-026-9},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Computer Graphics Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2010)
TI - COMBINING ABSOLUTE POSITIONING AND VISION FOR WIDE AREA AUGMENTED REALITY
SN - 978-989-674-026-9
AU - Banwell T.
AU - Calway A.
PY - 2010
SP - 353
EP - 357
DO - 10.5220/0002830803530357