2 RELATED WORK
Multi-sensor architectures are now widespread, and research on related topics such as calibration is plentiful, especially for visual, inertial and LIDAR sensors. The authors of (Li et al., 2013) classify extrinsic calibration methods for camera and LIDAR sensors into three categories. The first is based on auxiliary sensors: extrinsic calibration is carried out using a third sensor, an Inertial Measurement Unit (IMU), and it has been shown that the rigid transformation between the two frames can be estimated this way (Nez et al., 2009). The second is based on specially designed calibration boards. The idea is to use a particular pattern to detect targets in both sensor frames, and then to express the target coordinates in each frame in order to derive the rigid transformation. In (Fremont et al., 2012), the calibration uses circular targets, is dedicated to multi-layer LIDARs, and addresses intelligent vehicle applications. That method determines the relative pose, in rotation and translation, of the sensors from sets of corresponding circular features acquired over several target configurations. Similarly, (Park et al., 2014) uses a polygonal planar board to calibrate a color camera and a multi-layer LIDAR for robot navigation tasks. The third category covers methods that use chessboard targets; this kind of calibration is also pattern-specific. Its advantage is that the intrinsic parameters of the camera and the extrinsic calibration of the camera and the LIDAR are determined simultaneously (Li et al., 2013). In addition to these three categories, we consider a fourth one: automatic extrinsic calibration. Such methods operate without a dedicated calibration board or an additional sensor. (John et al., 2015) proposes a calibration approach that does not require any particular shape to be located: it integrates the data perceived by a 3D LIDAR and a stereo camera using a Particle Swarm Optimization (PSO) algorithm, relying on objects observed in the environment rather than on a dedicated external pattern.
In addition, we can distinguish three main methods to solve the closed-form problem established from the corresponding features. The first solves it with linear techniques such as the Singular Value Decomposition (SVD) and uses this solution as an initial guess for a nonlinear optimization such as the Gauss-Newton or Levenberg-Marquardt algorithms; a sketch of this approach is given at the end of this section. The second builds on the idea that finding the global minimum of a given cost function requires an initial guess located in its basin of attraction; the authors of (Guo and Roumeliotis, 2013) proposed an analytical least-squares approach to carry out a generic calibration process. The third uses stochastic or search algorithms, such as PSO, to associate the features between the two frames, as shown in (John et al., 2015).
Based on this literature review, we believe that developing a generic solution for sensor alignment is a worthwhile goal. However, apart from automatic calibration approaches, little is said in the literature about the relationship between the sensor calibration and sensor fusion steps. We believe that such a contribution is useful for target-based solutions (shape- or pattern-specific), especially if computation-heavy algorithms (such as PSO) are avoided. In addition, few works address the tool-chain implementation on real vehicles, a topic we believe is of interest to practitioners.
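To make the first of these three solution methods concrete, the following sketch (in Python with NumPy and SciPy; the function names `kabsch` and `refine` and the rotation-vector parameterization are our own illustrative choices, not the implementation of any cited work) computes a closed-form SVD estimate of the rigid transformation between matched 3D point sets and refines it with Levenberg-Marquardt:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def kabsch(p_l, p_c):
    """Closed-form [R|t] minimizing ||R p_l + t - p_c||^2 via SVD (Kabsch)."""
    mu_l, mu_c = p_l.mean(axis=0), p_c.mean(axis=0)
    H = (p_l - mu_l).T @ (p_c - mu_c)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation, det(R) = +1
    return R, mu_c - R @ mu_l

def refine(R0, t0, p_l, p_c):
    """Levenberg-Marquardt refinement seeded with the closed-form guess."""
    x0 = np.hstack([Rotation.from_matrix(R0).as_rotvec(), t0])
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        return ((p_l @ R.T + x[3:]) - p_c).ravel()
    sol = least_squares(residuals, x0, method="lm")
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]

# Synthetic check: recover a known transform from noisy correspondences.
rng = np.random.default_rng(0)
R_true = Rotation.from_rotvec([0.1, -0.2, 0.3]).as_matrix()
t_true = np.array([0.5, -0.1, 0.2])
p_l = rng.uniform(-5.0, 5.0, (20, 3))
p_c = p_l @ R_true.T + t_true + 0.01 * rng.standard_normal((20, 3))
R, t = refine(*kabsch(p_l, p_c), p_l, p_c)
```

The linear solution is cheap and needs no initialization; the nonlinear step then polishes it against the same residuals, which is the usual division of labor in this first family of solvers.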
3 PROBLEM FORMULATION AND ANALYTICAL LEAST-SQUARES SOLUTION
3.1 Problem Formulation and Basic Concepts
The calibration process is an alignment procedure between given sensor frames: it finds the relation between the coordinate frames of the sensors so that points can be transformed from one frame into another. For the extrinsic calibration of a LIDAR sensor and a camera, the task is to estimate the relative position, in the LIDAR and camera frames, of a given point located in the real-world frame. The objective is to find the unknown 6-Degrees-Of-Freedom (DOF) transformation between the two sensor frames.
In other words, the goal is to find the rigid transformation $[{}^{C}R_{L} \mid {}^{C}\vec{t}_{L}]$ that allows us to determine the correspondence, in the camera frame $\{C\}$, of a given 3D LIDAR point $\vec{p}_{L} = [x_{L}, y_{L}, z_{L}]^{T}$ expressed in the LIDAR sensor frame $\{L\}$. Let $\vec{p}_{C} = [x_{C}, y_{C}, z_{C}]^{T}$ be the correspondence of $\vec{p}_{L}$:

$$\vec{p}_{C} = {}^{C}R_{L}\,\vec{p}_{L} + {}^{C}\vec{t}_{L} \qquad (1)$$
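As a minimal numerical illustration of equation (1), assuming the extrinsic parameters ${}^{C}R_{L}$ and ${}^{C}\vec{t}_{L}$ are already known (the values below are made up for the example, not calibration results), a LIDAR point is mapped into the camera frame as follows:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical extrinsics ^C R_L and ^C t_L (illustrative values only).
R_CL = Rotation.from_euler("xyz", [0.0, 0.0, 5.0], degrees=True).as_matrix()
t_CL = np.array([0.1, -0.3, 0.05])  # translation in meters

p_L = np.array([2.0, 1.5, 0.0])     # point in the LIDAR frame {L}
p_C = R_CL @ p_L + t_CL             # equation (1): same point in {C}
```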
Based on (Guo and Roumeliotis, 2013), we extend the existing calibration solution to multi-line pattern targets. We define the coordinate systems of the sensors as follows: the origin $O_{C}$ is the center of the camera and the origin $O_{L}$ is the center of the LIDAR sensor frame. Without loss of generality, the LIDAR scanning plane is defined as the plane $z_{L} = 0$ (see