To perceive complex environments such as urban roads, the vehicle is equipped with several sensors. At the front of the vehicle, a multi-layer LIDAR (SICK LD-MRS) and a millimeter-wave RADAR (Fujitsu Ten) are installed; these sensors are used to detect distant obstacles.
Moreover, three LIDARs are mounted on the roof of the vehicle. One of them is a high-definition LIDAR (Velodyne HDL-64E S2), which is used to detect middle-range obstacles. This LIDAR embeds 64 laser transmitters and, by rotating 360 degrees around its vertical axis, it measures three-dimensional positions in all directions 10 times per second. Since this LIDAR provides dense three-dimensional information at 1.3 million points per second, the three-dimensional positions of almost all vehicles, sidewalls, the road surface, and so on can be obtained up to a range of about 60 meters, as shown in Figure 2(a). The remaining two LIDARs (SICK LMS 291-S05) are used to detect lane markers; they are mounted on each side of the vehicle, looking at the road surface near the vehicle.
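The geometry of such a rotating multi-beam scanner can be illustrated with a short sketch: each return is defined by the azimuth of the rotating head, the fixed elevation of the firing laser, and the measured range, which together yield a Cartesian point. The following Python sketch is illustrative only; the coordinate conventions and the firings-per-revolution figure are assumptions, not the Velodyne driver's actual interface.

    import numpy as np

    def polar_to_cartesian(range_m, azimuth_rad, elevation_rad):
        """Convert one LIDAR return (range, azimuth, elevation) to a 3-D point.

        Assumed conventions (illustrative only): azimuth is measured around
        the vertical z-axis, elevation above the horizontal x-y plane.
        """
        xy = range_m * np.cos(elevation_rad)   # projection onto ground plane
        x = xy * np.cos(azimuth_rad)
        y = xy * np.sin(azimuth_rad)
        z = range_m * np.sin(elevation_rad)
        return np.array([x, y, z])

    # Rough throughput check against the figures quoted above (assuming
    # roughly 2000 firings per laser per revolution):
    points_per_second = 64 * 2000 * 10
    print(points_per_second)  # 1280000, i.e. about 1.3 million points/s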
Figure 2: Overview of the Environment Perceptor. (a) Raw laser point cloud. (b) Example of a typical scene perceived by the Environment Perceptor.
In addition, a GNSS/INS system (Applanix POS LV220) is used to obtain a precise vehicle trajectory at all times. This system consists of a Distance Measurement Indicator (DMI), an Inertial Measurement Unit (IMU), and two GNSS receivers, which include a GPS/GLONASS azimuth heading measurement subsystem. It thereby obtains the vehicle pose at 100 Hz by Kalman filtering in a tightly coupled GNSS/INS integration. When GPS/GLONASS signals and an RTK correction signal are available, the system measures the vehicle pose with an accuracy of 3 cm in position and 0.05 degrees in attitude. Additionally, it maintains a position accuracy of better than 0.7 m after 1 km or 1 minute of travel without GPS/GLONASS signals.
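The POS LV's tightly coupled filter is proprietary, but the predict/update cycle it relies on can be sketched with a minimal one-dimensional constant-velocity Kalman filter that propagates the state at 100 Hz and corrects it with GNSS position fixes. All matrices and noise values below are assumed for illustration and are not the actual Applanix parameters.

    import numpy as np

    class SimpleKalmanFilter:
        """Toy 1-D position/velocity filter illustrating GNSS/INS-style fusion."""

        def __init__(self, dt=0.01):                    # 100 Hz prediction rate
            self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
            self.H = np.array([[1.0, 0.0]])             # GNSS observes position
            self.Q = np.diag([1e-4, 1e-3])              # process noise (assumed)
            self.R = np.array([[0.03 ** 2]])            # 3 cm GNSS noise (assumed)
            self.x = np.zeros((2, 1))                   # state: [position, velocity]
            self.P = np.eye(2)                          # state covariance

        def predict(self):
            """Propagate the state one 10 ms step (inertial prediction)."""
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q

        def update(self, z):
            """Correct the prediction with a GNSS position fix z (metres)."""
            y = np.array([[z]]) - self.H @ self.x       # innovation
            S = self.H @ self.P @ self.H.T + self.R     # innovation covariance
            K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(2) - K @ self.H) @ self.P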
To realize autonomous driving, actuators are installed on the steering wheel, gas pedal, brake, shift lever, and parking brake. Moreover, to realize natural driving behavior, the horn, turn signals, and hazard lights can also be operated by computer. These actuators are driven by EPOS2 motor controllers manufactured by Maxon Motor, and real-time control is achieved over a CAN bus network.
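The message layout on the bus is not specified here, and the EPOS2 controllers in fact speak the CANopen protocol with their own object dictionary; the snippet below therefore only sketches how a command frame might be put on a CAN bus using the python-can library, with a purely hypothetical arbitration ID and payload encoding.

    import struct

    import can  # python-can library

    STEERING_CMD_ID = 0x201  # hypothetical arbitration ID, not from the paper

    def send_steering_setpoint(bus, angle_deg):
        """Encode a steering setpoint in centidegrees and send it as one frame."""
        payload = struct.pack("<h", int(angle_deg * 100))  # little-endian int16
        msg = can.Message(arbitration_id=STEERING_CMD_ID,
                          data=payload, is_extended_id=False)
        bus.send(msg)

    bus = can.Bus(channel="can0", interface="socketcan")
    send_steering_setpoint(bus, 5.0)  # request 5 degrees of steering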
3 SYSTEM OVERVIEW
Figure 3 shows an overview of our autonomous vehicle system. The software architecture used in our vehicle is designed as a data-driven pipeline in which individual modules process information asynchronously. Each module communicates with the other modules via an anonymous publish/subscribe message-passing protocol.
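The paper does not name the middleware, but the essential property of such an anonymous publish/subscribe scheme is that modules share only channel names, never direct references to each other. The following in-process Python sketch (names and API assumed) illustrates the idea; the real system passes messages between asynchronous processes.

    from collections import defaultdict

    class MessageBus:
        """Toy anonymous publish/subscribe bus."""

        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, channel, callback):
            """Register a callback for all future messages on a channel."""
            self._subscribers[channel].append(callback)

        def publish(self, channel, message):
            """Deliver a message to every subscriber of the channel."""
            for callback in self._subscribers[channel]:
                callback(message)

    # Hypothetical wiring: perception publishes obstacles, planning consumes them.
    bus = MessageBus()
    bus.subscribe("obstacles", lambda msg: print("planner received:", msg))
    bus.publish("obstacles", {"id": 1, "kind": "pedestrian"})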
As shown in Figure 3, our autonomous vehicle system is roughly composed of three parts: the Perception, Path Planning, and Controller modules. The Perception part consists of three modules: Lane Marker Detection, Map Matching, and Environment Perception. The Lane Marker Detection module extracts the lane markers on both sides of the vehicle using the side-looking LIDARs. These LIDARs provide distance and reflectivity measurements, from which the lane marker positions and the curvature of the lane markers are estimated (a minimal sketch of this step is given below). The Map Matching module refines the pose estimate given by the GNSS/INS system (Suganuma, 2011), since GNSS/INS systems typically suffer from significant drift error in urban environments. The remaining perception module, Environment Perception, is the main process among these modules and generates the information essential for safe driving. Environment Perception extracts static obstacles and estimates the motion of dynamic objects around the ego-vehicle.
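The lane marker extraction algorithm is not detailed here, but a common approach consistent with the description above is sketched below: lane paint is far more retroreflective than asphalt, so points whose reflectivity exceeds a threshold are kept and a low-order polynomial is fit to them, whose coefficients give the lateral position and curvature of the marker. The function name and threshold value are assumptions.

    import numpy as np

    def extract_lane_marker(xs, ys, reflectivity, threshold=0.6):
        """Fit a lane marker from one side-looking LIDAR scan.

        xs, ys: NumPy arrays of point coordinates in the vehicle frame (metres);
        reflectivity: per-point intensity in [0, 1].
        The threshold value is an assumption, not from the paper.
        """
        mask = reflectivity > threshold      # lane paint reflects strongly
        if mask.sum() < 3:
            return None                      # not enough marker points
        # y(x) = c2*x^2 + c1*x + c0: c0 ~ lateral offset, 2*c2 ~ curvature
        c2, c1, c0 = np.polyfit(xs[mask], ys[mask], deg=2)
        return {"lateral_offset": c0, "heading": c1, "curvature": 2.0 * c2}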
[Legend of Figure 2(b): ego-vehicle trajectory; dynamic objects and predicted trajectories; drivable area (OGM); static obstacles (OGM); ego-vehicle; pedestrian; vehicle; bicycle; curb stone; occlusion.]