LINEAR IMAGE REPRESENTATION UNDER CLOSE LIGHTING
FOR SHAPE RECONSTRUCTION
Yoshiyasu Fujita, Fumihiko Sakaue and Jun Sato
Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, Japan
Keywords:
Shape from shading, Near light source, Linear representation, 3D shape recovery.
Abstract:
In this paper, we propose a method for representing intensity images of objects illuminated by near point light
sources. Our image representation model is linear, and thus the 3D shape of objects can be recovered
linearly from intensity images taken under near point light sources. Since our method does not require the
integration of surface normals to recover 3D shapes, the shapes can be recovered even if they are not
smooth, unlike with standard shape-from-shading methods. The experimental results confirm the efficiency of
the proposed method.
1 INTRODUCTION
In recent years, the photometric properties of camera
images have been studied extensively for reconstructing
the 3D shape of objects and for generating photorealistic
CG images (Shashua, 1997; Hayakawa, 1994;
Mukaigawa et al., 2006; Iwahori, 1990; Kim and Burger,
1991; Sato et al., 2006; Okabe and Sato, 2006). It has
been shown by Shashua (Shashua, 1997) that if we
assume a point light source located at infinity and no
specular reflection, we can generate arbitrary images
from a linear combination of three basis images taken
under three different light sources.
Mukaigawa et al. (Mukaigawa et al., 2001) proposed
a method called image linearization, which enables
us to generate arbitrary images from three basis
images, even if specular reflections and/or shadows
exist in the images. The photometric properties of each
image point, such as specular reflection, diffuse
reflection and shadow, can also be classified by using
image linearization (Mukaigawa et al., 2006).
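The linear image model underlying these methods can be sketched as follows. This is our own illustration rather than code from the cited papers; the scene, light directions and variable names are hypothetical, and attached shadows are ignored (no clamping to zero), as the purely linear model assumes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Albedo-scaled surface normals, one row per pixel (synthetic Lambertian scene).
B = rng.standard_normal((1000, 3))

# Three linearly independent distant light directions give three basis images.
L_basis = np.array([[1.0, 0.0, 0.2],
                    [0.0, 1.0, 0.1],
                    [0.1, 0.2, 1.0]])
I_basis = B @ L_basis.T          # columns are the three basis images

# A novel distant light is a linear combination of the basis lights ...
l_new = np.array([0.3, -0.5, 0.8])
coeffs = np.linalg.solve(L_basis.T, l_new)

# ... so the image under it is the same combination of the basis images.
I_new = I_basis @ coeffs
assert np.allclose(I_new, B @ l_new)
```

The key point is that, under an infinitely distant light and Lambertian reflectance, image intensity is linear in a single global light vector, so three basis images span all such images.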
On the other hand, many methods have been proposed
for reconstructing the 3D shape of objects from image
intensities. In general, three or more images are
sufficient for recovering the surface normal at each
image point, and the 3D shape of an object can then be
recovered by integrating the surface normals, provided
the 3D shape is differentiable (Hayakawa, 1994).
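The normal-recovery step described above amounts to classical photometric stereo: each pixel yields one linear equation per light, so three known distant lights determine the (albedo-scaled) normal. The following minimal sketch is our own illustration on synthetic data, not the paper's method; integration of the normals into depth is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth albedo-scaled normals for a synthetic 16-pixel patch.
n_true = rng.standard_normal((16, 3))

# Three known, linearly independent distant light directions.
L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])

# One intensity per pixel per light: i = L n (linear, shadow-free model).
I = n_true @ L.T

# Each pixel gives a 3x3 linear system  L n = i  for its normal.
n_est = np.linalg.solve(L, I.T).T
assert np.allclose(n_est, n_true)
```

Recovering depth from these normals requires integrating a gradient field, which is exactly the smoothness assumption that the proposed method avoids.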
Unfortunately, these methods assume that the
point light sources are located at infinity, and they
cannot be applied if the point light sources are close
to the object, i.e., near point light sources. This is
because the images generated by near light sources
include non-linear components and cannot be
represented linearly. However, images generated by
near point light sources carry much more information
about the 3D geometry than those generated by
infinitely distant point light sources, and thus their
analysis is very important.
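The non-linearity can be seen directly from the standard near-light Lambertian model, in which both the incident direction and the inverse-square attenuation depend on the surface point. The sketch below is our own illustration (the function and scene are hypothetical): two points with the same normal receive different intensities under a near light, which cannot happen under a distant light, where intensity depends only on the normal.

```python
import numpy as np

def near_light_intensity(x, n, s, albedo=1.0):
    """Lambertian intensity at surface point x with unit normal n,
    lit by a point source at s: albedo * (n . (s - x)) / |s - x|**3."""
    d = s - x
    r = np.linalg.norm(d)
    return albedo * max(n @ d, 0.0) / r**3

n = np.array([0.0, 0.0, 1.0])        # same normal at both points
s = np.array([0.0, 0.0, 1.0])        # point source one unit above origin

i0 = near_light_intensity(np.array([0.0, 0.0, 0.0]), n, s)
i1 = near_light_intensity(np.array([0.5, 0.0, 0.0]), n, s)

# Under a distant light i0 and i1 would be equal; here they differ.
assert i0 != i1
```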
Iwahori et al. (Iwahori, 1990) proposed a method
for computing the surface normal and depth of a
Lambertian surface illuminated by a known near light
source. This method solves non-linear equations,
assuming that the point light source lies in the
direction of the surface normal at the point where the
image intensity is maximum. Kim (Kim and Burger, 1991)
analyzed the uniqueness of the solution to these
non-linear equations. Although these methods recover
less ambiguous shape information, they incur a large
computational cost and may not provide optimal
solutions.
To avoid these problems, Sato et al. (Sato
et al., 2006; Okabe and Sato, 2006) proposed a
method for linearizing images with near light sources
by dividing the images into small sub-images and
assuming parallel light within each sub-image.
However, the computational cost of these methods is
also large, since they require iterative algorithms.
Furthermore, the accuracy of the recovered geometry
is limited, since only local constraints are used in
each sub-image.
In this paper, we propose a method for linearly
representing images under near light sources. We show
that the linear representation of a near point light source
Fujita Y., Sakaue F. and Sato J. (2009).
LINEAR IMAGE REPRESENTATION UNDER CLOSE LIGHTING FOR SHAPE RECONSTRUCTION.
In Proceedings of the Fourth International Conference on Computer Vision Theory and Applications, pages 67-72
DOI: 10.5220/0001797000670072
Copyright © SciTePress