Three-Dimensional Visual Reconstruction of Path Shape
Using a Cart with a Laser Scanner
Kikuhito Kawasue¹, Ryunosuke Futami¹ and Hajime Kobayashi²
¹Faculty of Engineering, University of Miyazaki, 1-1 Gakuen Kibanadai Nishi, Miyazaki, Japan
²Mog Consultant, 20-20 Miyamae Kishiwada, Osaka, Japan
Keywords: Measurement System, Computer Vision, Three-dimensional, Laser Scanner, Path, Surface, Calibration,
Pattern Matching.
Abstract: A movable system for the three-dimensional measurement of the shape of a path (road) surface has been
developed. The measurement can be taken by rolling the proposed measurement cart along the path. The
measurement system is composed of a laser scanner, CCD camera, omni-directional camera and a computer.
The laser scanner measures the cross-sectional shape of the path at a rate of 40 Hz. The direction of the
CCD view is downward to observe the texture of the path surface. The relative movement of the
measurement cart to the path is detected by analysing the optical flow of the texture movement. Cross-
sectional shapes of the path are accumulated, and the three-dimensional path shape is reconstructed on the
basis of the movement of the measurement cart. The image data recorded by the omni-directional camera
are allocated to the three-dimensional shape data, and the three-dimensional path is visualized in color on
the computer. The reconstructed path data can be used for repair and design of a path in the field of civil
engineering. The experimental results show the feasibility of our system.
1 INTRODUCTION
In recent years, measurement systems that measure
the shape of large structures, such as bridges, tunnels
or roads, have been developed for the maintenance
of such structures. For example, mobile mapping
systems (MMSs) have been utilized to measure the
shape of large structures (Gandolfi 2008, El Hakim 2002,
Murai 2001, Haala 1995). In a typical MMS, an
automobile equipped with a laser scanner runs on a
road and the 3D reconstruction of the road is
established by accumulating the shape data from the
laser scanner. The position of the automobile is
detected by GPS. However, the position cannot be
detected inside a structure or on a small path
between buildings since the GPS signal from a
satellite cannot be obtained. MMSs also have problems
with accuracy, cost, and system size. Therefore,
development of a new MMS without GPS is
desirable.
In this study, we developed a movable three-
dimensional measurement system for a path (road)
surface. The measurement system is composed of
the following: laser scanner, CCD camera, omni-
directional camera, computer, and cart. The laser
scanner measures the cross-sectional shape of the
path at a rate of 40 Hz. The direction of the CCD
view is downward to observe the texture of the path
surface. The relative movement of the measurement
cart to the path is detected by analysing the optical
flow of the texture movement (Bigun 1987, 1990).
The cross-sectional shapes of the path perpendicular
to the moving direction are accumulated, and the
three-dimensional path condition is reconstructed on
the computer based on the movement of the
measurement cart. The proposed localization method
of the system enables us to measure a target space
with high accuracy without GPS. Since the
measurement cart is small, it can be used in a path
too narrow for an automobile to pass. The image
data recorded by the omni-directional camera are
allocated to the three-dimensional shape data, and
the three-dimensional path is visualized in color on
the computer. In addition, the system enables us to
measure a path having tilt or torsion by adopting a
tilt sensor in the system. The reconstructed path data
can be used for repair and design of a path in the
field of civil engineering. The experimental results
show the feasibility of our system.
2 MEASUREMENT SYSTEM
2.1 System Setup
Figure 1 shows a photograph of the measurement cart. The
cart is composed of a computer, a laser scanner (UTM-
30LX; Hokuyo Automatic Co., Ltd.), an omni-directional
camera with a tilt sensor, and a CCD camera to detect the
movement of the system. The target space is measured
three-dimensionally by rolling the cart on the path. The
scanning (projection) direction of the laser is
perpendicular to the moving direction of the cart. The
view of the omni-directional camera is aligned to include
the laser path. Another CCD camera is attached at the
lower position of the cart. The direction of the CCD
camera view is downward so that the CCD camera
observes the texture of the path surface.
Figure 1: System setup (laser scanner and omni-directional camera, tilt sensor, and CCD camera for detecting the movement of the cart).
2.2 Measurement Procedure of the
System
The measurement (reconstruction) is established by
arranging the cross-sectional shapes of the path
perpendicular to the moving direction of the cart.
The arrangement of the cross sections is executed by
considering the direction and the magnitude of the
displacement of the cart. The direction and the
magnitude are detected by the CCD attached at the
lower position of the cart. The optical flow of the
texture movement of the path surface is analysed to
detect the movement. By using this method, the
movement of the cart can be detected accurately
regardless of the path surface condition.
The laser scanner projects and scans the beam in
a radial direction and the cross section of the space
is measured at a rate of 40 Hz. Figure 2 shows the
projection image of the laser scanner.
Figure 2: Projection image of the laser.
(Direction of scanning is perpendicular to the
moving direction of the cart)
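To make the accumulation step concrete, the following is a minimal sketch in Python of how successive cross-sectional profiles could be stacked into a point cloud using the per-scan displacement of the cart. It assumes each scan is already available as (y, z) points in the scan plane and that the displacement comes from the optical-flow analysis of Section 3.1; the function and variable names are illustrative, and heading and tilt corrections are omitted here.

```python
import numpy as np

def accumulate_scans(profiles, displacements):
    """Stack 2D cross-sectional profiles into a 3D point cloud.

    profiles      : list of (N_i, 2) arrays, each row a (y, z) point of one
                    laser scan taken perpendicular to the moving direction.
    displacements : list of per-scan cart displacements along the moving
                    direction (metres), e.g. speed / 40 Hz.
    """
    points = []
    x = 0.0
    for profile, dx in zip(profiles, displacements):
        x += dx                                   # advance along the path
        yz = np.asarray(profile, dtype=float)
        xs = np.full((yz.shape[0], 1), x)         # same x for the whole scan
        points.append(np.hstack([xs, yz]))        # rows of (x, y, z)
    return np.vstack(points)

# toy example: three identical scans, cart moving at 0.2 m/s, scanner at 40 Hz
scan = np.column_stack([np.linspace(-1.0, 1.0, 5), np.zeros(5)])
cloud = accumulate_scans([scan] * 3, [0.2 / 40.0] * 3)
print(cloud.shape)   # (15, 3)
```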
3 IMAGE PROCESSING FOR
THE MEASUREMENT
3.1 Detection of Movement of the Cart
The direction and the magnitude of the displacement
of the cart are detected by the CCD attached at the
lower position of the cart. The image correlation
method is used to estimate the optical flow. Small
interrogation regions are selected in the images
recorded by the CCD. An example of the
interrogation regions is shown by the rectangular
regions in Figure 3. The intensity of pixels is used to
find the matching interrogation regions in
consecutive different images. The following
equation is used as the correlation function.
$$ R(a,b)=\frac{\displaystyle\sum_{u=0}^{H_t-1}\sum_{v=0}^{W_t-1} I(a+u,\,b+v)\,O(u,v)}{\sqrt{\displaystyle\sum_{u=0}^{H_t-1}\sum_{v=0}^{W_t-1} I(a+u,\,b+v)^{2}\;\sum_{u=0}^{H_t-1}\sum_{v=0}^{W_t-1} O(u,v)^{2}}} \qquad (1) $$
$H_t$ and $W_t$ are the sizes of the interrogation region, $I$ is the intensity of pixels in the input image, and $O$ is the intensity of pixels at (u, v) in the original interrogation region. $R$ is a correlation value, and the (a, b) that maximizes $R$ gives the movement of the cart in the interval. The vectors in Figure 3 represent the optical flow vectors in the interval. The direction and the displacement of the cart are estimated from these optical flow vectors.
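The block matching of (1) can be sketched as follows in Python (NumPy), assuming grayscale frames given as 2D arrays; the brute-force search range and the function names are illustrative and are not the authors' implementation.

```python
import numpy as np

def ncc(template, patch):
    """Normalised cross-correlation between two equal-sized windows, as in (1)."""
    num = np.sum(patch * template)
    den = np.sqrt(np.sum(patch ** 2) * np.sum(template ** 2))
    return num / den if den > 0 else 0.0

def match_displacement(prev_img, next_img, top, left, h_t, w_t, search=10):
    """Find the shift (a, b) of one interrogation region between two frames."""
    template = prev_img[top:top + h_t, left:left + w_t].astype(float)
    best, best_ab = -1.0, (0, 0)
    for a in range(-search, search + 1):          # candidate vertical shifts
        for b in range(-search, search + 1):      # candidate horizontal shifts
            r0, c0 = top + a, left + b
            if (r0 < 0 or c0 < 0 or
                    r0 + h_t > next_img.shape[0] or c0 + w_t > next_img.shape[1]):
                continue
            patch = next_img[r0:r0 + h_t, c0:c0 + w_t].astype(float)
            score = ncc(template, patch)
            if score > best:                      # keep the best-matching shift
                best, best_ab = score, (a, b)
    return best_ab
```

In practice the same normalized correlation search is also available as cv2.matchTemplate with the cv2.TM_CCORR_NORMED flag, which avoids the explicit double loop.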
3.2 Calibration
Since the coordinate system of optical flow obtained
in 3.1 is based on the camera coordinate system, the
coordinate system has to be transformed to the
Figure 3: Detection of the movement of the cart by using
optical flow.
global coordinate system for quantitative
measurement. The relation between the camera
coordinate system (u, v) and the global coordinate
system (x, y, z) is as follows. In (2), $h_{ij}$ denotes the elements of the transformation matrix (Wei 1993, 1994).
$$ s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} h_{11} & h_{12} & h_{13} & h_{14} \\ h_{21} & h_{22} & h_{23} & h_{24} \\ h_{31} & h_{32} & h_{33} & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} \qquad (2) $$
This equation is transformed as follows.
$$ \begin{aligned} h_{11}x + h_{12}y + h_{13}z + h_{14} - h_{31}ux - h_{32}uy - h_{33}uz &= u \\ h_{21}x + h_{22}y + h_{23}z + h_{24} - h_{31}vx - h_{32}vy - h_{33}vz &= v \end{aligned} \qquad (3) $$
A scale board is set as shown in Figure 4 and the
CCD camera records the scale on the board. The
laser scanner is set on the upper position of the cart
and detects the z position of the scale board. At least
six non-coplanar points are selected by changing the
z-position of the scale board. The camera coordinate
(u, v) is read by using the mouse device on the
computer. The global coordinate (x, y) is read by the
scale on the board and z is detected by the laser
scanner. The coordinate pairs of the six (or more) points in the camera coordinate system (u, v) and the global coordinate system (x, y, z) are substituted into (3), and the parameters $h_{ij}$ of the transformation matrix are determined.
Therefore, the conversion from the
camera coordinate system (u, v) to the global
coordinate system (x, y) is as follows.
$$ \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} h_{11} - h_{31}u & h_{12} - h_{32}u \\ h_{21} - h_{31}v & h_{22} - h_{32}v \end{pmatrix}^{-1} \begin{pmatrix} u - h_{14} + (h_{33}u - h_{13})z \\ v - h_{24} + (h_{33}v - h_{23})z \end{pmatrix} \qquad (4) $$
where z is detected by the laser scanner.
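A sketch of this calibration in Python is given below, assuming the correspondences have already been collected as described: it stacks the two linear equations of (3) per point, solves for the eleven parameters in the least-squares sense, and then applies (4) to recover (x, y) from (u, v) and the laser-measured z. The function names are illustrative.

```python
import numpy as np

def calibrate(uv, xyz):
    """Solve (3) for the 11 parameters h11..h33 from >= 6 point pairs."""
    A, b = [], []
    for (u, v), (x, y, z) in zip(uv, xyz):
        # each correspondence contributes the two equations of (3)
        A.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z]); b.append(u)
        A.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return h   # (h11, h12, h13, h14, h21, h22, h23, h24, h31, h32, h33)

def camera_to_global(u, v, z, h):
    """Apply (4): recover (x, y) from image coordinates and the laser-measured z."""
    h11, h12, h13, h14, h21, h22, h23, h24, h31, h32, h33 = h
    M = np.array([[h11 - h31 * u, h12 - h32 * u],
                  [h21 - h31 * v, h22 - h32 * v]])
    rhs = np.array([u - h14 + (h33 * u - h13) * z,
                    v - h24 + (h33 * v - h23) * z])
    return np.linalg.solve(M, rhs)   # (x, y)
```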
Figure 4: Calibration setup.
3.3 Allocation of Color Information to
Shape Data
The omni-directional camera is attached near the
laser scanner, and the view of the camera is aligned
to include the scan area of the laser scanner (Park
2013). Three-dimensional point cloud data of the
cross section of the path are obtained by the laser
scanner and the color information obtained by the
omni-directional camera is allocated to each point of
the cloud. The arrangement of the omni-directional
camera and the laser scanner is shown in Figure 5.
This figure is a view from the moving direction of
the cart. In this figure, the angle θ = ∠(P1 P2 P3) defined by the points P1(0, 0), P2(0, H) and P3(x, y), where P3 is a point measured by the laser scanner, is calculated by the inner product formula as follows:
$$ \theta\,[\mathrm{deg}] = \cos^{-1}\!\left( \frac{H\,y}{H\sqrt{x^{2} + y^{2}}} \right) \times \frac{180}{\pi} \qquad (5) $$
As the laser is scanned from the first quadrant to
the fourth quadrant, the value of θ is modified for
each quadrant area.
$$ \theta \leftarrow \begin{cases} 90^{\circ} - \theta & \text{(first quadrant)} \\ 90^{\circ} + \theta & \text{(second quadrant)} \\ 90^{\circ} + \theta & \text{(third quadrant)} \\ 450^{\circ} - \theta & \text{(fourth quadrant)} \end{cases} \qquad (6) $$
The color information is allocated to each measured point using (5) and (6).
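A minimal sketch of this allocation angle in Python, using the reconstructed forms of (5) and (6); the coordinate convention and the parameter H (the vertical offset shown in Figure 5) are assumptions taken from the figure.

```python
import math

def point_angle_deg(x, y, H):
    """Angle of a scanned point (x, y) used for colour allocation (eqs. (5), (6))."""
    if x == 0.0 and y == 0.0:
        raise ValueError("angle is undefined at the origin")
    # (5): angle from the vertical axis via the inner product formula
    c = max(-1.0, min(1.0, (H * y) / (H * math.hypot(x, y))))
    theta = math.degrees(math.acos(c))
    # (6): resolve the quadrant, since acos alone cannot distinguish left/right
    if x >= 0 and y >= 0:        # first quadrant
        return 90.0 - theta
    if x < 0 and y >= 0:         # second quadrant
        return 90.0 + theta
    if x < 0 and y < 0:          # third quadrant
        return 90.0 + theta
    return 450.0 - theta         # fourth quadrant

# the result agrees with math.degrees(math.atan2(y, x)) % 360
```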
4 SLOPE DETECTION
In our system, the tilt of the cart is detected by the
tilt sensor. This sensor detects the direction of the
gravitational acceleration (Gx, Gy). The range of the
value is −1000 to 1000 mG.
Figure 5: Relationship between omni-directional camera
and laser scanner.
The slope of the cart is defined as follows:
$$ \theta_{x}\,[\mathrm{deg}] = \sin^{-1}\!\left( \frac{G_{y}}{1000} \right) \times \frac{180}{\pi}, \qquad \theta_{y}\,[\mathrm{deg}] = \sin^{-1}\!\left( \frac{G_{x}}{1000} \right) \times \frac{180}{\pi} \qquad (7) $$
As the raw data obtained by the tilt sensor are not stable, as shown in Figure 6, 300–500 samples from the sensor are selected and their mean is calculated. The smoothed tilt data are also shown in Figure 6.
Figure 6: Result of the smoothing process (tilt angle [deg.] versus sampling number; original and smoothed data).
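A sketch of (7) together with the averaging described above, in Python, assuming raw sensor readings in mG; the window length follows the 300–500 samples mentioned in the text, and the function names are illustrative.

```python
import numpy as np

def tilt_angles_deg(gx_mg, gy_mg):
    """Convert tilt-sensor readings in mG (range -1000..1000) to angles, eq. (7)."""
    theta_x = np.degrees(np.arcsin(np.clip(gy_mg / 1000.0, -1.0, 1.0)))
    theta_y = np.degrees(np.arcsin(np.clip(gx_mg / 1000.0, -1.0, 1.0)))
    return theta_x, theta_y

def smooth(samples, window=400):
    """Moving average over 300-500 raw samples to suppress sensor noise."""
    samples = np.asarray(samples, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(samples, kernel, mode='valid')
```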
5 EXPERIMENT
5.1 Detection Accuracy of the Cart
Movement
The cross section of the path is arranged considering
the movement of the cart. The accuracy of the
reconstruction is dependent on accurate detection of
the movement. The measurement accuracy of the
cart displacement is evaluated on the basis of the
cart speed. The average speed of the cart is
estimated from images of the CCD camera. The line
scale shown in Figure 7 is used to measure the actual
displacement of the cart. The cart was rolled along
the line scale and the result of the computation was
compared with the actual displacement. The
experimental result is shown in Figure 8.
Figure 7: Line scale used for the experiment.
Figure 8: Accuracy of the movement detection of the cart.
The result shows that the percentage of measurement error is relatively large below 0.1 m/s and above 0.3 m/s. This error occurs for the following reasons.
Below 0.1 m/s: the difference between two consecutive images is too small to detect the optical flow vector in the digital image. Because of quantization, the optical flow vector cannot be calculated for a movement of less than 1 pixel.
Above 0.3 m/s: the difference between two consecutive images is too large for the correlation value to be estimated reliably, so the error is caused by mismatched correspondences.
Therefore, the best performance is realized at approximately 0.2 m/s.
The error of the laser scanner used in our system
is 10–30 mm and the reconstruction accuracy is
dependent on the laser scanner accuracy.
5.2 Reconstruction of the Path on the
Computer
The path on our university campus was measured to
check the feasibility of the system. A photograph of
the path is shown in Figure 9. The cart was rolled on
this path. The reconstructed path is shown in Figure
10. The length of the measured section is approximately 10 m and the number of measured points is approximately 1 million. Once the data are loaded on the computer, the path can be viewed from any direction.
Figure 9: Photograph of the path in the campus.
Figure 10: Reconstructed path on the computer.
6 CONCLUSIONS
In this study, a movable three-dimensional
measurement system without GPS was developed.
The system is composed of a cart with a laser
scanner, omni-directional camera, and CCD camera.
The laser scanner detects the cross section of the
path. The detected cross section is arranged on the
computer on the basis of the movement of the cart.
The movement of the cart is calculated from the optical flow vectors of the texture image on the path surface.
Since it does not use GPS, the proposed
measurement system can be used indoors. Therefore,
applications to various situations in civil engineering
are expected.
REFERENCES
Gandolfi, S., Barbarella, M., Ronci, E., Burchi, A., 2008. Close photogrammetry and laser scanning using a mobile mapping system for the high detailed survey of a height density urban area. ISPRS08, B5: 909–914.
El Hakim, S., Beraldin, J., Picard, M., Vettore, A., 2003. Effective 3D modeling of heritage sites. In: 3DIM03, 302–309.
El Hakim, S., Beraldin, J., Lapointe, J., 2002. Towards automatic modeling of monuments and towers. In: 3DPVT02, 526–531.
Murai, S., 2001. Generation of 3D city models in Japan: An overview. ASCONA01, 59–64.
Haala, N., Hahn, M., 1995. Data fusion for the detection and reconstruction of buildings. ASCONA95, 211–220.
Bigun, J., Granlund, G., Wiklund, J., 1991. Multidimensional orientation estimation with applications to texture analysis and optical flow. PAMI 13, 775–790.
Bigun, J., Granlund, G., 1987. Optimal orientation detection of linear symmetry. ICCV87, 433–438.
Bigun, J., 1990. A structure feature for image processing applications based on spiral functions. CVGIP 51, 166–194.
Wei, G., Ma, S., 1994. Implicit and explicit camera calibration: Theory and experiments. PAMI 16, 469–480.
Wei, G., Ma, S., 1993. A complete two-plane camera calibration method and experimental comparisons. ICCV93, 439–446.
Park, S., Chung, M., 2013. 3D world modeling using 3D laser scanner and omni-direction camera. FCV13, 285–288.
VISAPP2014-InternationalConferenceonComputerVisionTheoryandApplications
604