2.1 Calibration
A depth camera gives the computer an additional channel of scene
information, making it possible to reconstruct the three-dimensional
scene in real time from the camera's depth stream. However, in order to
recover three-dimensional coordinates from the measured data, the depth
image must be aligned with the color pixels, and this alignment depends
on the results of camera calibration, which provides the parameters the
alignment requires. Calibration includes the internal parameters of each
camera and the external parameters relating the color and depth cameras.
Color camera calibration has been studied extensively. For depth
cameras, however, existing methods cannot balance accuracy and speed,
and their results are easily corrupted by noise in the depth data. For
this reason, we study the joint calibration of the color and depth
cameras.
Figure 1: Calibration of the cameras: (a) color camera and (b) depth
camera.
To achieve the joint calibration of the color and depth cameras, we
capture images of a checkerboard from different perspectives with the
cameras to be calibrated, and compute each camera's intrinsic parameter
matrix, together with the extrinsic parameters associated with each
image, using the camera calibration interface provided by the OpenCV
library.
The Kinect depth camera uses an infrared speckle emitter to project an
infrared beam onto the scene. When the beam strikes a surface and is
reflected back to the depth camera, the camera computes the depth of the
object from the geometric relationship between the returning speckle
spots. Figure 1 shows the calibration images of the color camera and the
depth camera; the right picture is the infrared image corresponding to
the color image. From Fig. 1 we can compute the internal parameters of
the depth camera and of the color camera, again using the interface
provided by OpenCV. The distortion parameters of the color camera are
[0.025163, -0.118850, -0.006536, -0.001345] and those of the depth
camera are [-0.094718, 0.284224, -0.005630, -0.001429]. The internal
parameter matrices of the color camera and the depth camera are then:
E_color =
    [ 554.952628     0.000000   327.545377 ]
    [   0.000000   555.959694   248.218614 ]
    [   0.000000     0.000000     1.000000 ]

E_depth =
    [ 597.599759     0.000000   322.978715 ]
    [   0.000000   597.651554   239.635289 ]
    [   0.000000     0.000000     1.000000 ]
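To make concrete what these matrices encode: under the pinhole model, a
camera-space point (X, Y, Z) maps to the pixel (u, v) = (fx·X/Z + cx,
fy·Y/Z + cy), where fx, fy are the diagonal entries and (cx, cy) the
last column of the intrinsic matrix. A minimal sketch applying the color
intrinsics above to a sample point (the point itself is an assumed
example, not a measurement):

```python
import numpy as np

# Color-camera intrinsic matrix from the calibration above.
E_color = np.array([[554.952628, 0.0, 327.545377],
                    [0.0, 555.959694, 248.218614],
                    [0.0, 0.0, 1.0]])

def project(K, point_3d):
    """Pinhole projection of a camera-space 3D point to pixel coordinates."""
    p = K @ np.asarray(point_3d, dtype=float)
    return p[:2] / p[2]  # divide by depth: homogeneous -> pixel

# A point 0.1 m right of the optical axis at 2 m depth (assumed example).
u, v = project(E_color, [0.1, 0.0, 2.0])
# u = 554.952628 * 0.1 / 2 + 327.545377 ≈ 355.293, v = cy ≈ 248.219
```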
A few points should be noted during calibration. First, the calibration
board should be as large as possible, at least the size of an A3 sheet.
Second, the angle between the board plane and the camera's image plane
should not be too large; it should be kept below 45 degrees. Third, the
tilts and positions of the board should be as diverse as possible,
because boards parallel to one another contribute nothing to the
calibration result. Fourth, at least ten images should be used for
calibration, which helps improve accuracy. Fifth, the camera resolution
should be set properly, preferably with the same aspect ratio as the
depth map.
2.2 Projection Transformation Matrix
After calibrating the color camera, we need to obtain the projection
transformation matrix, the depth camera's internal parameters, and its
external parameters relative to the color camera.
In our calibration system, the color camera is fixed to the depth camera
and the two remain parallel, so only a fixed translation is needed to
bring the depth data into the coordinate system of the color camera. The
depth data are then projected into the color image to form the final
depth buffer. During this process, note that because the two cameras
have different resolutions, the depth buffer and the color data cannot
be aligned exactly in the strict sense; however, since only part of the
depth data is needed for verification, the depth image does not need to
be enhanced.
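Under this parallel, translation-only arrangement, mapping a depth pixel
into the color image amounts to back-projecting with the depth
intrinsics, shifting by the inter-camera translation, and re-projecting
with the color intrinsics. A minimal sketch, where the 25 mm baseline
vector is an assumed placeholder rather than a calibrated value:

```python
import numpy as np

# Intrinsic matrices obtained from the calibration in Section 2.1.
K_depth = np.array([[597.599759, 0.0, 322.978715],
                    [0.0, 597.651554, 239.635289],
                    [0.0, 0.0, 1.0]])
K_color = np.array([[554.952628, 0.0, 327.545377],
                    [0.0, 555.959694, 248.218614],
                    [0.0, 0.0, 1.0]])

# Translation between the two optical centers, in meters; this 25 mm
# baseline is an assumed placeholder, not a measured value.
T = np.array([0.025, 0.0, 0.0])

def depth_pixel_to_color(u, v, z, K_d=K_depth, K_c=K_color, t=T):
    """Map a depth-image pixel (u, v) with depth z (meters) into
    color-image pixel coordinates, assuming the cameras are parallel
    and differ only by the translation t."""
    # Back-project to a 3D point in the depth camera's frame.
    p = z * np.linalg.inv(K_d) @ np.array([u, v, 1.0])
    # Pure translation into the color camera's frame (no rotation needed).
    p_color = p + t
    # Re-project with the color intrinsics.
    q = K_c @ p_color
    return q[:2] / q[2]

# The depth camera's principal point at 2 m depth lands near the color
# principal point, shifted horizontally by the baseline's projection.
u_c, v_c = depth_pixel_to_color(322.978715, 239.635289, 2.0)
```

Because the shift q depends on the point's depth z, this mapping must be
evaluated per pixel; a single global image offset would only be correct
for one depth plane.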
Figure 2: The transformation from depth data to 3D
coordinates.
On the corresponding area of the depth
checkerboard image, we randomly calibrate a block