Authors:
Masahiro Oida, Fumihiko Sakaue and Jun Sato
Affiliation:
Nagoya Institute of Technology, Japan
Keyword(s):
Mixed reality, Photometric calibration, Geometric calibration.
Related Ontology Subjects/Areas/Topics:
Applications; Computer Vision, Visualization and Computer Graphics; Geometry and Modeling; Image and Video Analysis; Image-Based Modeling; Pattern Recognition; Software Engineering; Surface Geometry and Shape
Abstract:
For mixed reality and other applications, it is very important to achieve photometric and geometric consistency in image synthesis. This paper describes a method for calibrating a camera and a light source simultaneously from photometric and geometric constraints. In general, feature points in a scene are used for computing camera positions and orientations. However, if the camera and the objects are rigidly attached and move together, changes in the shading of the objects in the images also carry useful information about the camera's geometric motion. In this paper, we show that by using both shading information and feature point information, we can calibrate cameras from fewer feature points than existing methods require. Furthermore, we show that the proposed method can calibrate light sources as well as cameras. The accuracy of the proposed method is evaluated using real and synthetic images.
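To illustrate the general idea of combining photometric and geometric cues, the following is a minimal toy sketch (not the paper's actual formulation): a single rotation angle is estimated by jointly minimizing feature-point reprojection error and a Lambertian shading error, assuming the object is rigidly attached to the camera so its surface normal rotates with it. All point coordinates, the light direction, and the weighting are synthetic assumptions chosen for illustration.

```python
import numpy as np

def rot2d(theta):
    """2-D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

rng = np.random.default_rng(0)
theta_true = 0.4                       # unknown camera rotation (radians)
pts = rng.uniform(-1.0, 1.0, (3, 2))   # a few scene feature points (synthetic)
light = np.array([0.0, 1.0])           # known unit light direction (assumed)
normal0 = np.array([1.0, 0.0])         # object surface normal before rotation

# Synthetic observations: rotated feature points and Lambertian intensity.
obs_pts = pts @ rot2d(theta_true).T
obs_intensity = max(0.0, rot2d(theta_true) @ normal0 @ light)

def cost(theta, w=1.0):
    """Geometric reprojection error plus weighted photometric (shading) error."""
    geo = np.sum((pts @ rot2d(theta).T - obs_pts) ** 2)
    pho = (max(0.0, rot2d(theta) @ normal0 @ light) - obs_intensity) ** 2
    return geo + w * pho

# Simple 1-D grid search over the rotation angle; a real system would use
# nonlinear least squares over full pose and light parameters.
thetas = np.linspace(-np.pi, np.pi, 20001)
theta_est = thetas[np.argmin([cost(t) for t in thetas])]
print(theta_est)  # close to theta_true
```

In this toy setup the shading term adds an extra scalar constraint per image, which is the intuition behind needing fewer feature points when both cues are used.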