Authors: Svenja Kahn 1; Harald Wuest 1 and Dieter W. Fellner 2
Affiliations: 1 Fraunhofer IGD, Germany; 2 TU Darmstadt Interactive Graphics Systems Group, Germany
Keyword(s): Markerless camera tracking, Model-based tracking, 3D reconstruction, Time-of-flight, Sensor fusion, Augmented reality.
Related Ontology Subjects/Areas/Topics: Applications; Computer Vision, Visualization and Computer Graphics; Geometry and Modeling; Human-Computer Interaction; Image-Based Modeling; Methodologies and Methods; Model-Based Object Tracking in Image Sequences; Motion and Tracking; Motion, Tracking and Stereo Vision; Pattern Recognition; Physiological Computing Systems; Real-Time Vision; Software Engineering
Abstract:
The most challenging algorithmic task for markerless Augmented Reality applications is the robust estimation of the camera pose. Given a 3D model of a scene, the camera pose can be estimated via model-based camera tracking without the need to instrument the scene with fiducial markers. Until now, the bottleneck of model-based camera tracking has been the availability of such a 3D model. Recently, time-of-flight cameras have been developed which acquire depth images in real time. With a sensor fusion approach that combines the color data of a 2D color camera with the 3D measurements of a time-of-flight camera, we acquire a textured 3D model of a scene. We propose a semi-manual reconstruction step in which the alignment of several submeshes with a mesh processing tool is supervised by the user to ensure a correct alignment. The evaluation of our approach shows its applicability for reconstructing a 3D model that is suitable for model-based camera tracking, even for objects which are difficult to measure reliably with a time-of-flight camera due to their demanding surface characteristics.
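To make the sensor fusion idea more concrete, the sketch below shows one way the fusion of a depth image and a color image could look: depth pixels of the time-of-flight camera are back-projected to 3D points, transformed into the color camera frame with a known rigid calibration, and projected into the color image to sample texture colors for a colored point cloud. This is a minimal illustrative sketch, not the implementation from the paper; the pinhole camera model, the NumPy-based code, and all names (backproject_depth, colorize_points, K_tof, K_rgb, R, t) are assumptions made for illustration.

```python
import numpy as np

def backproject_depth(depth, K_tof):
    """Back-project a time-of-flight depth image (meters) to 3D points
    in the ToF camera frame, assuming a pinhole model (illustrative)."""
    h, w = depth.shape
    fx, fy = K_tof[0, 0], K_tof[1, 1]
    cx, cy = K_tof[0, 2], K_tof[1, 2]
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only pixels with a valid depth measurement

def colorize_points(pts_tof, color_img, K_rgb, R, t):
    """Transform ToF points into the color camera frame, project them with the
    color camera intrinsics and sample the image to obtain point colors."""
    pts_rgb = pts_tof @ R.T + t          # rigid transform from the ToF-to-color calibration
    uv = pts_rgb @ K_rgb.T
    uv = uv[:, :2] / uv[:, 2:3]          # perspective division
    h, w = color_img.shape[:2]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (pts_rgb[:, 2] > 0)
    colors = color_img[v[valid], u[valid]]
    return pts_tof[valid], colors        # colored 3D points, ready for meshing/texturing
```

Such a colored point cloud would then be meshed and aligned with other submeshes in a mesh processing tool, as described in the abstract; occlusion handling and the semi-manual alignment itself are beyond this sketch.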