Authors: David J. T. Boas¹; Sergii Poltaretskyi²; Jean-Yves Ramel³; Jean Chaoui²; Julien Berhouet⁴ and Mohamed Slimane³
Affiliations:
¹ Laboratoire d’Informatique Fondamentale Appliquée de Tours (LIFAT), Université François Rabelais, 37200 Tours, France; IMASCAP, 29280 Plouzané, France
² IMASCAP, 29280 Plouzané, France
³ Laboratoire d’Informatique Fondamentale Appliquée de Tours (LIFAT), Université François Rabelais, 37200 Tours, France
⁴ Laboratoire d’Informatique Fondamentale Appliquée de Tours (LIFAT), Université François Rabelais, 37200 Tours, France; Service Orthopédie 1, Centre Hospitalier Universitaire de Tours, Avenue de la République, 37044 Tours Cedex 09, France
Keyword(s):
Computer Vision, RGB-Depth Camera, Camera Calibration, Spherical Object.
Related Ontology Subjects/Areas/Topics: Applications; Applications and Services; Camera Networks and Vision; Computer Vision, Visualization and Computer Graphics; Device Calibration, Characterization and Modeling; Geometry and Modeling; Image Formation and Preprocessing; Image Formation, Acquisition Devices and Sensors; Image-Based Modeling; Pattern Recognition; Software Engineering
Abstract:
RGB-Depth calibration refers to the estimation of both RGB and depth camera parameters, as well as their relative pose. This step is critical for aligning the two streams correctly, yet the literature still lacks a general method for accurate RGB-D calibration. Recently, promising methods have proposed using spheres to perform the calibration, since the centers of these objects are well distinguishable by both cameras. This paper proposes a new minimization function that constrains the positions of the sphere centers, requiring only knowledge of the sphere radius and a previously calibrated RGB camera. We show the limits of previous approaches and how the proposed method corrects them. Results demonstrate an improvement in relative pose estimation over the original method on the selected datasets.
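The sphere-based calibration idea summarized above can be illustrated with a minimal sketch: fit each sphere's center from its depth points, then recover the relative pose by rigidly aligning the depth-frame centers to the corresponding RGB-frame centers (Kabsch algorithm). This is an illustrative assumption, not the paper's method — the sketch uses a plain algebraic sphere fit, whereas the proposed minimization additionally constrains the centers using the known sphere radius and the calibrated RGB camera. All function names here are hypothetical.

```python
import numpy as np

def fit_sphere_center(points):
    """Algebraic least-squares sphere fit (illustrative, radius not
    constrained). From ||p - c||^2 = r^2 we get the linear system
    2 p.c - k = ||p||^2 with k = ||c||^2 - r^2, solved for (c, k)."""
    A = np.hstack([2.0 * points, -np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]  # estimated sphere center

def rigid_align(src, dst):
    """Kabsch: find rotation R and translation t minimizing
    sum_i ||R @ src[i] + t - dst[i]||^2 over corresponding points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # proper rotation (det = +1)
    return R, cd - R @ cs
```

In a real pipeline, `src` would hold sphere centers fitted from the depth point cloud and `dst` the same centers triangulated through the calibrated RGB camera; the pair `(R, t)` is then the depth-to-RGB relative pose.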