pose at the same time, distortion in the lidar-scan data
occurs (Inui et al., 2017).
In addition, when the lidar is mounted on a two-
wheeled vehicle, the mapping accuracy deteriorates
because of the large swing motion of the lidar. As a
result, missed detections and false detections of
moving objects increase. To address this problem, in
this paper we estimate the vehicle's pose at a shorter
period than the lidar-scan period, and then use the
pose estimates to correct the distortion in the lidar-
scan data.
Furthermore, the differences (subtracted scan
data) between a 3D point-cloud environment map
acquired in advance and the current lidar-scan data
are extracted, and only the subtracted scan data are
mapped onto a grid map. Thereafter, moving-object
detection and tracking are performed with the
occupancy grid method and a Bayesian filter.
The rest of this paper is organized as follows. In
Section 2, an overview of the experimental system is
given. In Section 3, the methods of distortion
correction and map subtraction are described. In
Section 4, method of detecting and tracking moving
objects is described. In Section 5, experimental
results are presented, followed by conclusions in
Section 6.
2 EXPERIMENTAL SYSTEM
Figure 1 shows an overview of the two-wheeled
vehicle.
As a first step of the study, we use a bicycle (Yamaha
PAS-GEAR-U) as a two-wheeled vehicle. On the
upper part of the bicycle, a 32-layer lidar (Velodyne
HDL-32E) is mounted, and a global navigation
satellite system/inertial navigation system (GNSS/
INS) unit (Novatel PwrPak7-E1) is mounted on the
rear part.
The maximum range of the lidar is 70 m, the
horizontal viewing angle is 360° with a resolution of
0.16°, and the vertical viewing angle is 41.34° with a
resolution of 1.33°. The lidar acquires 384
measurements, each including an object's position,
every 0.55 ms (at 2° horizontal-angle increments). The
period for the lidar beam to complete one rotation
(360°) in the horizontal direction is 100 ms, and about
70,000 measurements are acquired in one rotation. In
this paper, one rotation of the lidar beam in the
horizontal direction is referred to as one scan, and the
set of measurements acquired in one scan is referred
to as the lidar-scan data.
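The per-scan point count quoted above can be checked from the stated figures (variable names here are illustrative, not from the paper):

```python
# Timing figures for the Velodyne HDL-32E as stated above
points_per_block = 384       # measurements per 0.55 ms firing block
azimuth_step_deg = 2.0       # horizontal angle covered per block

# Blocks per 360-degree rotation and total points per scan
blocks_per_scan = int(360.0 / azimuth_step_deg)
points_per_scan = points_per_block * blocks_per_scan

print(blocks_per_scan, points_per_scan)  # 180 69120, i.e. about 70,000
```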
The GNSS/INS unit outputs the 3D position and
attitude angle (roll, pitch and yaw angles) every 100
ms. The horizontal and vertical position errors (RMS)
are 0.02 m and 0.03 m, respectively. The roll and
pitch angle errors (RMS) are 0.02°, and the yaw angle
error (RMS) is 0.06°.
Figure 1: Experimental bicycle.
3 SUBTRACTION OF SCAN DATA
3.1 Distortion Correction
The lidar-scan data are obtained in the sensor
coordinate frame Σ_S fixed on the lidar, and they are
mapped onto the world coordinate frame Σ_W using
the bicycle's pose. The output of the GNSS/INS unit
can be used as the bicycle's pose in GNSS
environments.
The observation period of the GNSS/INS unit is
100 ms, during which the lidar makes one rotation,
whereas scan data are captured every 0.55 ms, i.e.,
180 times within one rotation of the lidar. Therefore,
the bicycle's pose is estimated every 0.55 ms by
interpolating the poses output by the GNSS/INS unit
every 100 ms.
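As a sketch of this step, the 100 ms poses could be interpolated to the 0.55 ms grid as follows. Linear interpolation is an assumption here (the paper does not specify the scheme), and the angle interpolation takes the shortest path to handle the ±180° crossing; all names are our own:

```python
import math

def interp_angle(a0, a1, t):
    """Linearly interpolate between two angles (rad) along the shortest path."""
    d = math.atan2(math.sin(a1 - a0), math.cos(a1 - a0))
    return a0 + t * d

def interp_pose(pose0, pose1, t):
    """Interpolate a pose (x, y, z, roll, pitch, yaw) at fraction t in [0, 1]."""
    pos = [p0 + t * (p1 - p0) for p0, p1 in zip(pose0[:3], pose1[:3])]
    ang = [interp_angle(a0, a1, t) for a0, a1 in zip(pose0[3:], pose1[3:])]
    return pos + ang

# Example: two GNSS/INS poses 100 ms apart, queried at one 0.55 ms step
p0 = [0.0, 0.0, 0.0, 0.0, 0.0, math.radians(179.0)]
p1 = [1.0, 0.5, 0.0, 0.0, 0.0, math.radians(-179.0)]
pose_at_step = interp_pose(p0, p1, 0.55 / 100.0)
```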
For the i-th (i = 1, 2, ...) measurement in the scan
data, we define the position vector in Σ_S as
$\boldsymbol{p}_i = (x_i, y_i, z_i)^T$ and that in Σ_W as
$\boldsymbol{p}_i' = (x_i', y_i', z_i')^T$. $\boldsymbol{p}_i$ can be
converted to $\boldsymbol{p}_i'$ by

$$\begin{pmatrix} \boldsymbol{p}_i' \\ 1 \end{pmatrix} = T(\boldsymbol{X}) \begin{pmatrix} \boldsymbol{p}_i \\ 1 \end{pmatrix} \qquad (1)$$
where $\boldsymbol{X} = (x, y, z, \phi, \theta, \psi)^T$ is the bicycle's pose.
$(x, y, z)^T$ and $(\phi, \theta, \psi)^T$ are the position and attitude
angles (roll, pitch, and yaw angles) of the bicycle,
respectively, in Σ_W. $T(\boldsymbol{X})$ is the following
homogeneous transformation matrix:

$$T(\boldsymbol{X}) = \begin{pmatrix}
\cos\theta\cos\psi & \sin\phi\sin\theta\cos\psi - \cos\phi\sin\psi & \cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi & x \\
\cos\theta\sin\psi & \sin\phi\sin\theta\sin\psi + \cos\phi\cos\psi & \cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi & y \\
-\sin\theta & \sin\phi\cos\theta & \cos\phi\cos\theta & z \\
0 & 0 & 0 & 1
\end{pmatrix}$$
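A minimal implementation of Eq. (1) with this matrix might look as follows (a sketch; function and variable names are our own, not from the paper):

```python
import math

def transform_matrix(pose):
    """Homogeneous transform T(X) for the pose X = (x, y, z, roll, pitch, yaw)."""
    x, y, z, phi, theta, psi = pose
    cf, sf = math.cos(phi), math.sin(phi)      # roll
    ct, st = math.cos(theta), math.sin(theta)  # pitch
    cp, sp = math.cos(psi), math.sin(psi)      # yaw
    return [
        [ct * cp, sf * st * cp - cf * sp, cf * st * cp + sf * sp, x],
        [ct * sp, sf * st * sp + cf * cp, cf * st * sp - sf * cp, y],
        [-st,     sf * ct,                cf * ct,                z],
        [0.0,     0.0,                    0.0,                    1.0],
    ]

def to_world(pose, p_sensor):
    """Map a sensor-frame point p_i to the world-frame point p_i' via Eq. (1)."""
    T = transform_matrix(pose)
    ph = list(p_sensor) + [1.0]  # homogeneous coordinates
    return [sum(T[r][c] * ph[c] for c in range(4)) for r in range(3)]

# Example: a pure 90-degree yaw plus a translation of (1, 2, 0)
p_world = to_world([1.0, 2.0, 0.0, 0.0, 0.0, math.pi / 2], [1.0, 0.0, 0.0])
```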