Figure 2: The circular foot scanner consists of an upper 3D scan sensor and a lower 3D scan sensor. The lower sensor scans the sole of the foot and the upper sensor scans the upper part of the foot; the two scan results are then combined to obtain the entire 3D shape of the foot.
In a previous paper of this study (Lee et al., 2017), we developed a three-dimensional circular foot scanner with upper and lower 3D sensors to obtain the whole shape of the foot, as shown in Figure 1. To scan the whole shape of the foot, the upper sensor moves along a circular stage and the lower sensor moves along a linear stage. To obtain the correct shape of the whole foot, the data acquired from the two viewpoints must be represented in a single viewpoint, which requires knowing the 3D transformation relation between the two sensors.
In Figure 2, K1 and K2 represent the upper and the lower 3D scanning sensor coordinate systems, respectively. The 3D transformation relationship between the two sensor coordinate systems is expressed as T12, which consists of a 3x3 rotation matrix R and a translation vector t.
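For illustration only (not part of the original paper), the sketch below shows how such a rigid transformation maps points measured in one sensor frame into the other, assuming T12 maps points from the lower sensor frame K2 into the upper sensor frame K1; the numerical values of R and t are made-up examples.

```python
import numpy as np

# Hypothetical values for illustration only: a 90-degree rotation about
# the x-axis and a small translation between the two sensor frames.
R = np.array([[1.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])   # 3x3 rotation part of T12
t = np.array([0.0, 150.0, 200.0])  # translation part of T12 (e.g. in mm)

def transform_to_upper_frame(points_lower):
    """Map Nx3 points from the lower sensor frame K2 into the upper frame K1."""
    return points_lower @ R.T + t

# Example: a single scanned point expressed in the lower sensor frame.
p_lower = np.array([[10.0, 20.0, 300.0]])
p_upper = transform_to_upper_frame(p_lower)
print(p_upper)
```

Whether T12 maps from K1 to K2 or the reverse depends only on the chosen convention; inverting the assumed direction corresponds to applying R.T and -R.T @ t.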
In general, for two different cameras, the transformation matrix between two viewpoints can be obtained with the Zhang algorithm (Zhang, 2000), which uses a checkerboard pattern. In our previous research, we also used a checkerboard pattern to find the transformation relationship between the upper and lower 3D sensors.
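As a reference, the following is a hedged sketch of how such a checkerboard-based (Zhang-style) extrinsic calibration can be carried out with OpenCV, assuming both sensors observe the same board simultaneously and their intrinsic matrices and distortion coefficients are already known; the board dimensions, square size, and variable names are illustrative assumptions, not the implementation used in this paper.

```python
import cv2
import numpy as np

def board_pose(gray, K, dist, pattern=(9, 6), square=25.0):
    """Estimate the checkerboard pose (R, t) in one camera's frame."""
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        raise RuntimeError("checkerboard not detected")
    # 3D corner coordinates on the board plane (z = 0), in millimetres.
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    _, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.reshape(3)

def sensor_to_sensor(gray1, K1, dist1, gray2, K2, dist2):
    """Compose the two board poses into T12, mapping points from
    sensor 2's coordinate frame into sensor 1's coordinate frame.
    Assumes both detections return the corners in the same order."""
    R1, t1 = board_pose(gray1, K1, dist1)
    R2, t2 = board_pose(gray2, K2, dist2)
    R12 = R1 @ R2.T
    t12 = t1 - R12 @ t2
    return R12, t12
```

The composition follows from expressing a board point in both camera frames: X1 = R1*Xb + t1 and X2 = R2*Xb + t2, hence R12 = R1*R2^T and t12 = t1 - R12*t2.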
However, the Zhang algorithm is difficult to use for calibration between two sensors when the angle between the sensors' viewing directions is large. Unfortunately, because most foot scanners use multiple 3D sensors to capture both the sole and the rest of the foot shape, the viewing angles between the sensors are often more than 90 degrees. For this reason, some research has been done on calibrating 3D sensors with large viewing angles between them by using special calibration objects.
Barone et al. (2013) calibrate two sensors by placing printed markers on an object. Markers have the advantage that the orientation of the object is known and the feature points for calibration can be clearly identified, but placing printed markers on the object is inconvenient. Mitchelson and Hilton (2003) use a specially designed LED pattern to calibrate two sensors. Another calibration method uses three or more checkerboard patterns attached perpendicular to each other on a cube. However, it is not easy to make two or more sensors see the three checkerboard patterns simultaneously.
When a 3D scanning system consists of multiple 3D sensors with widely separated viewing directions, it is difficult to calibrate the sensor coordinate systems. Previous research relies on specially designed calibration objects and algorithms. In addition, because the 3D scanning systems in previous research have structures different from our scanning system, it is not easy to employ the previous methods.
In this paper, we propose a very simple and easy calibration method for two 3D sensors. In particular, the proposed method is applied to the calibration of our 3D foot scanning system. We evaluate the performance of the proposed method by obtaining a 3D model of a human foot using the estimated transformation between the two 3D sensors.
2 CALIBRATION METHOD USING A PAPER OBJECT
The authors of this paper have considered how to calibrate two sensors whose viewing directions differ by more than 90 degrees easily and simply. We also focused on making the calibration object from materials that are readily available, so that anyone can calibrate the two sensors easily. Therefore, the criteria for the proposed calibration object are as follows:
1. The material of the calibration object should be readily available.
2. Making the calibration object should be easy for anyone.
3. The algorithm for calibrating the two sensors using the calibration object should also be simple.