introduced by (Schröder et al., 2011; Butler et al.,
2012). Finally, our method can also be used with
multiple RGB and RGB-D cameras if needed (Fig. 1(b)).
Other works use mirrors with RGB cameras to reconstruct observed scenes (Nene et al., 1998; Mariottini et al., 2012). They build the 3D structure of a scene using Structure-from-Motion and stereo techniques. However, these methods operate strictly on RGB images and do not develop solutions for problems specific to RGB-D data, such as the calibration and registration of depth images.
Using mirrors with RGB-D cameras is not a new idea: there have been attempts at combining a Kinect with mirrors, but these studies are informal (Kinect vs. Mirror, 2010) and do not develop any algorithms or formulations for the 3D reconstruction of scenes.
Our main contribution in this paper is enabling users to obtain a more complete 3D reconstruction of an object from a single real depth image. With a proper configuration of mirrors and a single Kinect, one can accomplish the 3D reconstruction of an object using the proposed method. We develop and test algorithms for the simultaneous calibration and registration of real and virtual RGB-D cameras. We also describe methods for the full 3D reconstruction of scenes using the developed calibration techniques. Although multiple calibration pattern images with different positions and orientations can be used to calibrate the proposed system, a single image is found to be sufficient for the calibration procedure. Furthermore, we use an external high-resolution RGB camera to capture high-quality images for texture mapping of the reconstructed 3D structure of the object.
The rest of this paper is organized as follows: In Section 2, we give an overview of our method. In Section 3, we describe the calibration/registration procedure between the RGB-D camera and the external RGB camera, and then explain the calibration process between the real and virtual RGB-D cameras. In Section 5, we discuss the experimental results of the proposed method. Finally, we provide concluding remarks in Section 6.
2 METHOD OVERVIEW
The main processes and the data flow of the proposed system are shown in Fig. 3. Our system begins by capturing RGB-D images of the scene with test or calibration objects. We then calibrate the RGB-D camera using standard calibration procedures. Next, the direct and the reflected sections of the RGB-D image are segmented as the real and virtual RGB-D images, respectively. The calibration and registration of the real and virtual images is followed by the 3D reconstruction of the scene.
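To make the segmentation step concrete, the sketch below splits a single depth frame into its direct ("real") and mirror-reflected ("virtual") parts. The binary mirror mask and the horizontal flip of the reflected region are illustrative assumptions, not the paper's own segmentation procedure:

```python
import numpy as np

def segment_real_virtual(depth, mirror_mask):
    """Split a depth frame into direct and mirror-reflected parts.

    depth:       (H, W) depth image from the RGB-D camera
    mirror_mask: (H, W) boolean array, True where the mirror is seen
                 (assumed given; how the mask is obtained is not shown)
    Returns (real, virtual) images of the same shape, with the
    complementary region zeroed out in each.
    """
    real = np.where(mirror_mask, 0, depth)
    # A plane mirror shows a left-right flipped view of the scene, so
    # we flip the reflected region before treating it as a virtual view.
    virtual = np.where(mirror_mask, depth, 0)[:, ::-1]
    return real, virtual

# Toy 2x4 "depth image": the right half is covered by the mirror.
depth = np.array([[1, 2, 3, 4],
                  [5, 6, 7, 8]], dtype=float)
mask = np.zeros(depth.shape, dtype=bool)
mask[:, 2:] = True

real, virtual = segment_real_virtual(depth, mask)
print(real)
print(virtual)
```

In practice the two resulting images are then treated as the outputs of two distinct RGB-D cameras and fed into the calibration and registration stage.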
RGB-D cameras such as Microsoft's Kinect cannot produce high-quality RGB images because of their low resolution and low-quality lenses. To increase the texture quality of the reconstructed 3D scene, we use an external high-resolution RGB camera along with the RGB-D camera (Fig. 1(b)). In other words, we acquire depth data from the RGB-D camera and color data from the external RGB camera. We therefore have four cameras in total: two real and two virtual. Calibrating these four cameras enables us to reconstruct the 3D scene with better texture-mapping quality.
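Combining depth from one camera with color from another amounts to projecting each reconstructed 3D point into the external RGB camera and sampling its color there. A minimal pinhole-projection sketch of that idea follows; the intrinsics K and the pose (R, t) are made-up illustrative values, not calibration results from the paper:

```python
import numpy as np

def project_to_rgb(points, K, R, t):
    """Project Nx3 points (in the RGB-D camera frame) into the
    external RGB camera's pixel coordinates via a pinhole model."""
    cam = points @ R.T + t          # transform into the RGB camera frame
    uvw = cam @ K.T                 # apply the intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:]  # perspective divide -> (u, v)

# Illustrative intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                       # assume the cameras are aligned ...
t = np.array([0.1, 0.0, 0.0])       # ... with a 10 cm horizontal baseline

pts = np.array([[0.0, 0.0, 2.0]])   # a point 2 m in front of the sensor
uv = project_to_rgb(pts, K, R, t)
print(uv)
```

The pixel (u, v) returned for each 3D point is where the high-resolution texture is sampled.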
3 CALIBRATION
AND REGISTRATION
The first step of our method is constructing a test area surrounded by one or more mirrors. We place a calibration pattern at a location visible from both the RGB-D camera and the external RGB camera (Fig. 4 and Fig. 1(a)). Then, we calibrate the intrinsic and extrinsic parameters of the real RGB-D camera and the real RGB camera using the method of Zhang (2000). Next, we perform registration between the RGB-D camera and the external RGB camera using a method similar to that of Jones et al. (2011). Finally, the registration between the real and virtual RGB-D cameras is established. Note that the intrinsic parameters of the virtual RGB-D camera and the virtual RGB camera are identical to those of their real counterparts, which makes the overall calibration of the system easier than calibrating multiple real cameras. The next two subsections describe the details of the calibration and registration processes.
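The real-to-virtual relationship can be understood through the mirror plane: a plane with unit normal n and offset d reflects a point p to p - 2(n.p + d)n, and a virtual camera is the real camera reflected through that plane. The sketch below illustrates this standard catadioptric relation; the plane parameters and camera pose are arbitrary illustrative values, not quantities estimated by the paper's procedure:

```python
import numpy as np

def reflect_point(p, n, d):
    """Reflect point p across the plane {x : n.x + d = 0}, with |n| = 1."""
    return p - 2.0 * (np.dot(n, p) + d) * n

def reflect_pose(R, C, n, d):
    """Reflect a camera (rotation R, center C) across the mirror plane.
    H = I - 2 n n^T is the Householder reflection of directions; note
    det(H) = -1, matching the handedness flip a mirror introduces."""
    H = np.eye(3) - 2.0 * np.outer(n, n)
    return H @ R, reflect_point(C, n, d)

# Mirror: the plane z = 1 (unit normal along +z, offset d = -1).
n = np.array([0.0, 0.0, 1.0])
d = -1.0

C = np.zeros(3)                     # real camera at the origin
R = np.eye(3)
R_v, C_v = reflect_pose(R, C, n, d)
print(C_v)                          # virtual camera center, behind the mirror
```

Since the reflection preserves distances and angles, the virtual camera inherits the real camera's intrinsics unchanged, which is why only the real devices need intrinsic calibration.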
3.1 RGB-D and RGB Camera
Calibration
In order to compute the transformation between the real RGB-D camera and the real external RGB camera (Fig. 1(b)), we use the standard calibration pattern (Fig. 4), which has a total of 48 calibration corners. For a given calibration corner point C = [X, Y, Z]^T, the RGB-D camera produces a 3D vector [x_k, y_k, z_k]^T in the