interreflection is estimated based on the
reconstructed shape and reflectance of the surface,
and in the next iteration, this estimated interreflection is
compensated for when computing the shape and reflectance.
In their follow-up work (Nayar, 1992), the authors
extended the algorithm to deal with colored and
multi-colored surfaces. Because the
reflectance of a colored surface point depends on
the incident light spectrum, the algorithm is applied
to the three channels of the color images independently.
However, their method cannot handle occlusions.
The occluded parts, which can add interreflection
to the scene, are not considered when
estimating the initial guess of the shape and
reflectance. A poor initial guess of the object's shape
and reflectance can prevent the algorithm from
converging.
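The iterative compensation loop described above can be illustrated with a heavily simplified numerical sketch: two facing Lambertian patches each receive a fraction of the other's radiance, and the reflectance estimate is refined by repeatedly predicting and subtracting the interreflected component. The form factor, albedo values, and update rule below are our own toy assumptions, not the authors' exact formulation.

```python
# Toy model: two facing Lambertian patches; each receives, in addition
# to the direct irradiance, a fraction F of the other's radiance.
F = 0.3                # assumed form factor coupling the two patches
albedo = [0.8, 0.6]    # ground-truth reflectances (unknown to the loop)
direct = 1.0           # direct irradiance from the light source

def radiance(a):
    """Solve b0 = a0*(direct + F*b1), b1 = a1*(direct + F*b0) exactly."""
    det = 1.0 - F * F * a[0] * a[1]
    return [a[0] * direct * (1.0 + F * a[1]) / det,
            a[1] * direct * (1.0 + F * a[0]) / det]

observed = radiance(albedo)          # what the camera measures

# Iteration: estimate the interreflection from the current reflectance
# estimate, compensate the observation, and re-estimate the reflectance.
est = [observed[0] / direct, observed[1] / direct]   # naive first guess
for _ in range(60):
    interref = [p - a * direct for p, a in zip(radiance(est), est)]
    est = [(o - i) / direct for o, i in zip(observed, interref)]
print(est)   # converges toward the true albedos [0.8, 0.6]
```

The naive first guess is too bright because the observation includes interreflected light; each pass of the loop removes more of that excess, which is the fixed-point behavior the original algorithm relies on.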
Nayar et al. (Nayar, 2006) introduced a method
for separating the direct light from the global light in
complex scenes using high-frequency patterns. The
approach does not require the material properties of
the objects in the scene to be known. The key idea of
this method is to hide a small region of the scene
from the illuminated pattern while keeping the other
parts illuminated. The intensity of the hidden region
then comes only from global illumination. Once every
point in the scene has been hidden once, the
measurements form a global illumination map, and the
direct illumination map is obtained by subtracting the
global illumination map from an image in which all
regions are illuminated. Even though the proposed
method reduces the number of required images, this
number is still large (25 images are needed in their
experiments), and the patterns must have a
frequency high enough to sample the global
components.
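The subtraction step can be sketched in a few lines. The toy one-dimensional scene below, and the choice of hiding one pixel per pattern, are our own illustration; the sketch also relies on the method's approximation that hiding a small region leaves the global component essentially unchanged.

```python
# Toy 1-D "scene": made-up per-pixel direct and global components.
direct_true = [5.0, 3.0, 4.0, 2.0]
global_true = [1.0, 1.5, 0.5, 1.0]

full = [d + g for d, g in zip(direct_true, global_true)]  # all regions lit

# Capture one image per pattern; pattern k hides pixel k, so that pixel
# records only global light. (Real patterns hide many points at once,
# which is how the required image count is kept small.)
captures = []
for k in range(4):
    captures.append([g if i == k else d + g
                     for i, (d, g) in enumerate(zip(direct_true, global_true))])

# Assemble the global map from each pixel's "hidden" measurement, then
# subtract it from the fully lit image to get the direct map.
global_map = [captures[k][k] for k in range(4)]
direct_map = [f - g for f, g in zip(full, global_map)]
print(direct_map)   # recovers the direct component [5.0, 3.0, 4.0, 2.0]
```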
Seitz et al. (Seitz, 2005) proposed to use an inverse
light transport operator to separate the m-bounced light
in a scene with an arbitrary BRDF. The inverse light
transport operator is estimated under the assumption
of a Lambertian surface.
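For a Lambertian scene, the underlying idea can be sketched with standard radiosity algebra: the observed light x satisfies x = e + Fx, where e is the one-bounce (direct) component and F an interreflection kernel, so the m-bounce component is F^(m-1)e and applying the operator (I - F) to the observation cancels every higher bounce. The kernel and lighting values below are arbitrary toy choices, not Seitz et al.'s estimation procedure.

```python
# Toy interreflection kernel F (arbitrary, with norm < 1 so the bounce
# series converges) and a one-bounce (direct) light vector e.
F = [[0.0, 0.2, 0.1],
     [0.2, 0.0, 0.1],
     [0.1, 0.1, 0.0]]
e = [1.0, 0.5, 0.8]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Forward simulation of the observation: x = e + F e + F^2 e + ...
x = e[:]
bounce = e[:]
for _ in range(100):
    bounce = matvec(F, bounce)          # next bounce: F^(m-1) e
    x = [xi + bi for xi, bi in zip(x, bounce)]

# Inverse light transport: applying (I - F) to the observation cancels
# all interreflection and returns the one-bounce component.
recovered = [xi - fi for xi, fi in zip(x, matvec(F, x))]
print(recovered)   # ~ [1.0, 0.5, 0.8]
```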
In this paper, we propose a new method to
eliminate the interreflected light components in 3D
reconstruction using the HOC pattern (Lee, 2005);
the direct light component is then used to reconstruct
the 3D point cloud of the scene. First, the one-sided
edges of both direct and reflected light are estimated
in every captured pattern image. Second, a bottom-up
approach is used to check and eliminate the
interreflected boundaries from layer 3 to layer 1.
Finally, the direct boundaries are used to reconstruct
the 3D point cloud of the scene.
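The three steps can be sketched structurally as follows. The edge test used here (comparing rising-step height against a threshold, on the premise that reflected light is weaker than direct light) and the toy intensity profiles are our own simplified stand-ins for the HOC-based boundary-elimination rule detailed in Section II.

```python
def one_sided_edges(image):
    """Return (position, step height) for every rising intensity step."""
    return [(i, image[i] - image[i - 1])
            for i in range(1, len(image)) if image[i] > image[i - 1]]

# Toy pattern images for layers 1..3 (coarse to fine); interreflection
# adds a weak rising step (height 0.3) at position 6 of layer 3.
images = {
    1: [0, 0, 0, 0, 1, 1, 1, 1],
    2: [0, 0, 1, 1, 0, 0, 1, 1],
    3: [0, 1, 0, 0, 0, 0, 0.3, 0.3],
}

# Step 1: one-sided edge candidates in every captured pattern image.
candidates = {k: one_sided_edges(img) for k, img in images.items()}

# Step 2: bottom-up check from layer 3 to layer 1, eliminating
# boundaries whose step is too weak to come from direct light.
direct_edges = {}
for layer in (3, 2, 1):
    direct_edges[layer] = [pos for pos, step in candidates[layer]
                           if step >= 0.5]

# Step 3 would triangulate the surviving direct boundaries into a
# point cloud; here we just report them.
print(direct_edges)   # the weak edge at position 6 of layer 3 is gone
```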
The remainder of the paper is organized as
follows: In Section II, we describe our proposed
method to eliminate the reflected light component.
The experimental results are provided in Section III.
Finally, Section IV concludes the paper.
Figure 2: The appearance of interreflection together with
direct light. Here "1" denotes a region of direct light, "0" a
surface region with no illumination, and "R" the
reflected light.
2 THE PROPOSED METHOD
2.1 The Appearance of Interreflection
Together with Direct Light
When the projector illuminates the pattern onto the
scene, interreflection might occur and, depending on
the surface reflectance and geometry, appear
together with the illuminated pattern in different ways,
as shown in Figure 2. The possible cases are:
The interreflection appears on a region without
the illuminated pattern (Figure 2a).
The interreflection appears within the region of
the illuminated pattern (Figure 2b).
The interreflection appears on the boundary of the
illuminated pattern (Figure 2c-d).
When the pattern "1-0-1" is projected, the
interreflection appears edgewise on the region without
the illuminated pattern (Figure 2e), where "1" means a
white pattern and "0" means a black pattern.
However, in reality the power of the reflected
light stripe can be weakened, and the interreflection
can be unnoticeable depending on the distance from
A Method of Eliminating Interreflection in 3D Reconstruction using Structured Light 3D Camera
641