REAL-TIME LABEL INSERTION IN LIVE VIDEO THROUGH ONLINE TRIFOCAL TENSOR ESTIMATION

Robert Laganière, Johan Gottin

Abstract

We present an augmented reality application that can supplement a live video sequence with virtual labels associated with the scene content captured by an agile video camera moving inside an explored environment. The proposed method comprises two main phases: first, a matching phase in which reference images are successively compared with the captured images; and second, a tracking phase that maintains the correspondence between a successfully matched reference image and each frame of the captured sequence. Label insertion is based on projective transfer using the trifocal tensor, which is estimated and continuously updated as the camera moves through the scene.
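For context, the projective transfer underlying the label insertion can be sketched as point-line-point transfer with the trifocal tensor (Hartley and Zisserman, 2000, Ch. 15): a point matched in the first two views is mapped into the third view without any 3D reconstruction. The following is a minimal illustrative sketch; the camera matrices and 3D point are made-up values, not taken from the paper, where the tensor is instead estimated from point correspondences.

```python
import numpy as np

# Canonical first camera P1 = [I | 0]; two further cameras P2 = [A | a4],
# P3 = [B | b4]. These are illustrative values only.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.array([[1.0, 0.0, 0.1, 1.0],
               [0.0, 1.0, 0.2, 0.0],
               [0.0, 0.0, 1.0, 0.0]])
P3 = np.array([[1.0, 0.0, -0.1, 0.0],
               [0.0, 1.0,  0.1, 1.0],
               [0.0, 0.0,  1.0, 0.0]])

# Trifocal tensor slices T_i = a_i b4^T - a4 b_i^T,
# where a_i, b_i are the i-th columns of P2 and P3.
T = np.stack([np.outer(P2[:, i], P3[:, 3]) - np.outer(P2[:, 3], P3[:, i])
              for i in range(3)])

# A 3D point and its projections in the three views (homogeneous coords).
X = np.array([1.0, 2.0, 5.0, 1.0])
x1, x2, x3_true = P1 @ X, P2 @ X, P3 @ X

# Point-line-point transfer: choose any line l2 through x2 that is not the
# epipolar line of x1, then x3^k = x1^i * l2_j * T_i^{jk}.
l2 = np.cross(x2, np.array([0.0, 0.0, 1.0]))  # a line through x2
x3 = sum(x1[i] * (T[i].T @ l2) for i in range(3))

# The transferred point agrees with the true projection up to scale.
print(x3 / x3[2], x3_true / x3_true[2])
```

Here the tensor is built from known cameras only to verify the transfer formula; in the application described above, the tensor is estimated online from tracked feature matches and the same transfer equation places the labels in each new frame.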

References

  1. Bell, B., Hollerer, T., and Feiner, S. (2002). An annotated situation-awareness aid for augmented reality. In Proc. UIST, ACM Symp. on User Interface Software and Technology, pages 213-216.
  2. Boufama, B. and Habed, A. (2005). Registration and tracking in the context of AR. ICGST Int. Journal on Graphics, Vision and Image Processing, Vol. 3.
  3. Chia, K., Cheok, A., and Prince, S. (2002). Online 6-DOF augmented reality registration from natural features. In Proc. International Symposium on Mixed and Augmented Reality (ISMAR), pages 223-230.
  4. Fusiello, A., Trucco, E., Tommasini, T., and Roberto, V. (1999). Improving feature tracking with robust statistics. Pattern Analysis and Applications, 2:312-320.
  5. Hartley, R. and Zisserman, A. (2000). Multiple View Geometry in Computer Vision. Cambridge University Press.
  6. Kutulakos, K. and Vallino, J. (1998). Calibration-free augmented reality. IEEE Trans. on Visualization and Computer Graphics, 4:1-20.
  7. Li, J., Laganière, R., and Roth, G. (2004). Online estimation of trifocal tensors for augmenting live video. In IEEE/ACM Symp. on Mixed and Augmented Reality, pages 182-190.
  8. Lourakis, M. and Argyros, A. (2004). Vision-based camera motion recovery for augmented reality. In Computer Graphics Int. Conference, pages 569-576.
  9. Newman, J., Ingram, D., and Hopper, A. (2001). Augmented reality in a wide area sentient environment. In Int. Symp. on Augmented Reality, pages 77-86.
  10. Roth, G. and Whitehead, A. (2000). Using projective vision to find camera positions in an image sequence. In Proc. of Vision Interface, pages 225-232.
  11. Vacchetti, L., Lepetit, V., and Fua, P. (2004). Stable real-time 3D tracking using online and offline information. IEEE Trans. on Pattern Analysis and Machine Intelligence, 26:1385-1391.
  12. Vincent, E. and Laganière, R. (2001). Matching feature points in stereo pairs: A comparative study of some matching strategies. Machine Graphics and Vision, 10:237-259.
  13. Vincent, E. and Laganière, R. (2002). Matching feature points for telerobotics. In IEEE Int. Workshop on Haptic Virtual Env. and Applications, pages 13-18.


Paper Citation


in Harvard Style

Laganière R. and Gottin J. (2006). REAL-TIME LABEL INSERTION IN LIVE VIDEO THROUGH ONLINE TRIFOCAL TENSOR ESTIMATION. In Proceedings of the First International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, ISBN 972-8865-40-6, pages 435-441. DOI: 10.5220/0001377504350441


in Bibtex Style

@conference{visapp06,
author={Robert Laganière and Johan Gottin},
title={REAL-TIME LABEL INSERTION IN LIVE VIDEO THROUGH ONLINE TRIFOCAL TENSOR ESTIMATION},
booktitle={Proceedings of the First International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP},
year={2006},
pages={435-441},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0001377504350441},
isbn={972-8865-40-6},
}


in EndNote Style

TY - CONF
JO - Proceedings of the First International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP
TI - REAL-TIME LABEL INSERTION IN LIVE VIDEO THROUGH ONLINE TRIFOCAL TENSOR ESTIMATION
SN - 972-8865-40-6
AU - Laganière R.
AU - Gottin J.
PY - 2006
SP - 435
EP - 441
DO - 10.5220/0001377504350441