feature points from these orthogonal images. The
image for the texture mapping is also obtained from
them. Many postures of the subject were tested with regard to texture mapping and repeatability. A white or blue background is recommended because the body curves are detected by skin detection.
Although our software accepts any kind of digital image of the subject, an image-capturing device was designed to acquire the photos conveniently. The device, which is connected to a PC with a USB cable, consists of a digital camera and a distance-checking sensor. The image captured by the digital camera is sent directly to our software.
Olympus SDK v3.3 is used to implement remote capturing (http://www.olympus.com/). An ultrasonic sensor measures the distance to the subject and passes it to our software. Using calibration information indexed by this distance, we can roughly calculate the actual size of the subject.
Images of a calibration panel at several discrete positions are taken beforehand; our system's working distance range is 1-2 m. Tsai's calibration method is applied to derive the relations between the image coordinates and the world coordinates at each position. Figure 3 shows our image-capturing device and the calibration panel.
Figure 3: Photos of the image-capturing device (left) and
the calibration panel (right).
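As an illustrative sketch of this rough size estimation, the following Python fragment linearly interpolates a precomputed calibration table at the measured distance. The table values, the 0.25 m spacing and the function name are hypothetical, and the full image-to-world mapping comes from Tsai's method rather than from a single scalar factor:

import numpy as np

# Hypothetical calibration table obtained beforehand from the panel
# images: sensor distance (m) -> millimetres per image pixel.
CALIB_DISTANCE_M = np.array([1.0, 1.25, 1.5, 1.75, 2.0])     # 1-2 m range
CALIB_MM_PER_PX  = np.array([0.90, 1.10, 1.30, 1.50, 1.70])  # assumed values

def pixels_to_mm(length_px, distance_m):
    # Roughly convert an image length to millimetres by linearly
    # interpolating the calibration table at the measured distance.
    mm_per_px = np.interp(distance_m, CALIB_DISTANCE_M, CALIB_MM_PER_PX)
    return length_px * mm_per_px

# With this table, a 600 px length at 1.6 m maps to 600 * 1.38 = 828 mm.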
3.2 3D Feature Points’ Calculation
from Images
After taking pictures or loading the images of the subject, we assign the feature points. All of the points, or each point individually, can be scaled and translated easily with our GUI. Some points are constrained by the definitions in Table 1, and many of them are positioned automatically. For example, RSP and LSP have the same y-coordinate as FNP, and their coordinates in the front-view image are set at the outermost positions of the body at FNP's y-level.
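A minimal sketch of this automatic placement, assuming a binary skin mask as produced by the skin detection step described below; the function name is hypothetical, and which outermost pixel corresponds to RSP or LSP depends on whether the front view is mirrored:

import numpy as np

def place_shoulder_points(skin_mask, fnp_y):
    # Set RSP and LSP at the outermost skin pixels of the row that
    # shares FNP's y-coordinate in the front-view image.
    # skin_mask: 2D boolean array, True where skin was detected.
    xs = np.flatnonzero(skin_mask[fnp_y])
    if xs.size == 0:
        raise ValueError("no skin pixels at FNP's y-level")
    rsp = (int(xs[0]), fnp_y)   # outermost body pixel on the image's left
    lsp = (int(xs[-1]), fnp_y)  # outermost body pixel on the image's right
    return rsp, lsp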
In Section 2.2, the feature points P0, P2 and P4 in the front-view image are assumed to lie on the same plane in the 3D coordinate system. These points are also defined in the side view. Therefore, their x- and y-coordinates are assigned in the front-view image, and their z- and y-coordinates are determined in the left-view image. Internally, the side-view images are scaled and normalized according to the front view.
With these relationships, the coordinates of all the feature points are derived: the x-coordinates are obtained from the front image, the z-coordinates are computed from the side image, and the y-coordinates are common to both images. The proper amount of deformation of the template model is obtained by matching the corresponding feature points of the template to those calculated from the images. Sections 3.3 and 3.4 describe the algorithms in detail.
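A minimal sketch of this coordinate combination, assuming the side view has already been scaled and normalized so that corresponding y-coordinates agree; averaging the two y-values to absorb residual noise is an assumption, not a step stated above:

def lift_to_3d(front_pt, side_pt):
    # Combine a front-view point (x, y) with the corresponding
    # side-view point (z, y) into a single 3D point.
    x, y_front = front_pt
    z, y_side = side_pt
    return (x, (y_front + y_side) / 2.0, z)

# Example: a point at (120, 85) in the front view and (64, 85) in the
# side view becomes the 3D point (120, 85.0, 64).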
WP1~WP6 are detected automatically by skin detection in the YCbCr colour space. Only the Cb and Cr values need to be handled, because skin detection relies on the part of the colour that is invariant to illumination intensity. Eq. (1) below shows our skin detection criteria:
\{\, p(x, y) \mid Cb_{\min} < Cb < Cb_{\max} \;\&\; Cr_{\min} < Cr < Cr_{\max} \,\} \qquad (1)

where $Cb_{\min} = 77$, $Cb_{\max} = 132$, $Cr_{\min} = 133$, $Cr_{\max} = 171$.
The skin detection result contains many small holes in the interior of the body region. These holes are obstacles to finding the body's curves, so we fill any hole smaller than a threshold size proportional to the image size. Figure 4 shows the skin detection result and the feature points.
Figure 4: Skin detection result and feature points on 2D
front-view (left) and left-view (right) images. Some of the
points are positioned manually and the others are set
automatically.
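A minimal OpenCV sketch of this step, applying the thresholds of Eq. (1) and filling small interior holes with connected-component analysis; the hole_ratio parameter is an assumption, since only a threshold proportional to the image size is specified above:

import cv2
import numpy as np

def detect_skin(bgr_image, hole_ratio=0.001):
    # Binary skin mask from Eq. (1): 77 < Cb < 132 and 133 < Cr < 171.
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)          # OpenCV channel order is Y, Cr, Cb
    mask = ((cb > 77) & (cb < 132) &
            (cr > 133) & (cr < 171)).astype(np.uint8) * 255

    # Label the non-skin regions; any component smaller than the
    # threshold is an interior hole and is filled with skin.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(
        255 - mask, connectivity=4)
    max_hole = hole_ratio * mask.shape[0] * mask.shape[1]
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < max_hole:
            mask[labels == i] = 255
    return mask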
3.3 Global Deformation
In the global deformation step, we compute the affine transformation matrix that matches the template model as closely as possible to the feature points calculated from the images. The affine transformation includes translation, rotation and scaling. To minimize the sum of squared errors between the calculated feature points and the corresponding feature points of the transformed model, we use Procrustes analysis (J.C. Gower and G.B. Dijksterhuis, 2004). This method is computationally simple and stable.
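A minimal sketch of such an alignment, estimating a similarity transform (uniform scale s, rotation R, translation t) in the Kabsch/Umeyama style; this is one standard solution of the Procrustes problem and is an assumption here, not necessarily the exact formulation of Gower and Dijksterhuis:

import numpy as np

def procrustes_align(template_pts, target_pts):
    # Least-squares similarity transform mapping template points P onto
    # image-derived points Q: minimizes sum ||s * R @ p + t - q||^2.
    P = np.asarray(template_pts, dtype=float)  # (n, 3) template feature points
    Q = np.asarray(target_pts, dtype=float)    # (n, 3) points from the images
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mu_p, Q - mu_q
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = (U @ D @ Vt).T                         # optimal rotation
    s = (S * np.diag(D)).sum() / (Pc ** 2).sum()  # optimal uniform scale
    t = mu_q - s * (R @ mu_p)                  # optimal translation
    return s, R, t

# The template vertices are then mapped by v -> s * R @ v + t.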
This global deformation is an auxiliary step because it gives