Figure 6: New global frame of the IMU.
• We need a "starting point", a reference 3D position, from which we can start to integrate the 3D acceleration data.
• Noise on the acceleration data, small offset errors, and/or incorrectly subtracted gravitational acceleration will be integrated and, over time, will cause large drift errors in the position estimate if the data are used for more than a few seconds without any external update of the true position; a minimal sketch of this effect is given after this list.
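As a rough illustration of this drift, the following Python sketch (not from the paper; the sample rate, bias, and noise level are arbitrary assumptions) double-integrates the output of a static accelerometer corrupted by a small residual bias and white noise, and shows how quickly the position error grows.

```python
import numpy as np

# Assumed parameters (illustrative only, not from the paper)
fs = 100.0         # sample rate [Hz]
duration = 10.0    # integration time [s]
bias = 0.05        # residual accelerometer bias [m/s^2]
noise_std = 0.02   # white-noise standard deviation [m/s^2]

dt = 1.0 / fs
n = int(duration * fs)

# True acceleration is zero (static sensor), so any non-zero
# position estimate is pure drift.
acc_measured = bias + noise_std * np.random.randn(n)

# Naive double integration (rectangle rule)
velocity = np.cumsum(acc_measured) * dt
position = np.cumsum(velocity) * dt

print(f"Position error after {duration:.0f} s: {position[-1]:.2f} m")
# The bias alone already gives roughly 0.5 * bias * t^2 = 2.5 m after
# 10 s, which is why pure inertial position estimation is only usable
# for short durations without an external position update.
```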
The conclusion is that both the orientation and the position determined by this method depend very much on the type of motion and on the environment in which we are operating. For position estimation, short-duration motions, preferably cyclical and with known reference positions, will typically work well. We must also take magnetic perturbations into account: the orientation measured by the IMU is affected by the disturbances caused by ferromagnetic objects present in the environment. These constraints and problems obliged us to choose another method, more appropriate for our application, which requires accurate orientation and position measurements.
Method 2 As we have already mentioned, the robot gives the position of the tool located at the end of its last axis with respect to a defined user frame. Knowing the position of the IMU with respect to the robot tool, we deduce the transformation between the IMU frame, {I}, and the user frame, {U}, where {U} represents the camera world frame.
The IMU rotation with respect to the camera frame is

R_{CI} = R_{CW} \cdot R_{WT} \cdot R_{TI}    (3)
where R_{CI} and R_{CW} are respectively the rotations of the IMU frame and of the world frame with respect to the camera frame, R_{WT} is the rotation of the tool frame with respect to the world frame, and R_{TI} is the rotation of the IMU frame with respect to the tool frame.
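A minimal numpy sketch of the rotation chain in (3) could look as follows; R_CW, R_WT and R_TI are placeholders for values obtained from the camera calibration, the robot controller, and the IMU mounting respectively, and the identity values used here are purely illustrative.

```python
import numpy as np

# Placeholder rotation matrices (illustrative values only):
#   R_CW: world frame expressed in the camera frame (from camera calibration)
#   R_WT: tool frame expressed in the world frame   (read from the robot)
#   R_TI: IMU frame expressed in the tool frame     (from the IMU mounting)
R_CW = np.eye(3)
R_WT = np.eye(3)
R_TI = np.eye(3)

# Equation (3): rotation of the IMU frame with respect to the camera frame
R_CI = R_CW @ R_WT @ R_TI
```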
3.2.2 IMU Translation
For this part, we also use the data provided by the robot, namely the coordinates of its tool frame, {T}, with respect to its user frame, {U}. A simple reading on the robot control tool allows us to know the three translation components of the tool with respect to the user frame.
Nevertheless, we need to determine the translation of the IMU frame, {I}, with respect to the camera frame, {C}. It is therefore important to know exactly the position of the IMU frame with respect to the robot tool frame, {T}.
Indeed, the coordinates of the IMU frame origin, O_{IMU}, expressed in the tool frame, represent the translation of the IMU frame with respect to the tool frame, which we denote T_{TI}. This translation is computed from reported measurements and manufacturer data. Once T_{TI} is known, we compute T_{WI}, the translation of the IMU frame with respect to the world frame, by applying the following coordinate transformation:

T_{WI} = R_{WT} \cdot T_{TI} + T_{WT}    (4)
Finally, the translation T_{CI} is given by

T_{CI} = R_{CW} \cdot T_{WI} + T_{CW}    (5)
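Using the same placeholder convention as above, equations (4) and (5) can be sketched as follows; the rotations and the translations T_{TI}, T_{WT} and T_{CW} are assumed inputs (IMU mounting offset, robot reading, and camera extrinsics), and the numerical values shown are illustrative only.

```python
import numpy as np

# Assumed inputs (illustrative values, not from the paper):
R_WT = np.eye(3)                      # tool orientation in the world frame (robot)
R_CW = np.eye(3)                      # world orientation in the camera frame (calibration)
T_TI = np.array([0.00, 0.03, 0.05])   # IMU origin in the tool frame [m] (mounting offset)
T_WT = np.array([0.40, 0.10, 0.60])   # tool origin in the world frame [m] (robot reading)
T_CW = np.array([0.05, -0.02, 1.20])  # world origin in the camera frame [m] (extrinsics)

# Equation (4): IMU origin expressed in the world frame
T_WI = R_WT @ T_TI + T_WT

# Equation (5): IMU origin expressed in the camera frame
T_CI = R_CW @ T_WI + T_CW
print(T_CI)
```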
3.3 Camera Calibration
In this work, we have used a calibration method based on Zhang's technique (Zhang, 1998). The camera observes a planar pattern from a few (at least two) different orientations. Either the camera or the planar pattern can be moved, and the motion does not need to be known. The camera intrinsic and extrinsic parameters are solved for using an analytical solution, followed by a nonlinear optimization technique based on the maximum likelihood criterion (Zhang, 1998). Radial and tangential lens distortions are also modeled, and very good results have been obtained compared with classical techniques that use two or more orthogonal planes.
From (1), the rotation matrix and the translation vector are computed during the determination of the camera parameters in the calibration procedure. This transformation expresses the orientation and the translation of the camera frame, {C}, with respect to the camera world frame, {W}.
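As a sketch of how such a calibration could be run in practice (the paper does not specify the implementation; OpenCV's calibrateCamera, which follows Zhang's method, and the checkerboard geometry used here are assumptions), the intrinsics, the distortion coefficients, and the per-view rotation and translation can be obtained as follows.

```python
import cv2
import numpy as np
import glob

# Assumed checkerboard geometry (illustrative): 9x6 inner corners, 25 mm squares
pattern_size = (9, 6)
square_size = 0.025  # metres

# 3D coordinates of the pattern corners in the pattern (world) plane, Z = 0
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.png"):   # hypothetical image folder
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Zhang-style calibration: intrinsics, distortion, and per-view extrinsics
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Extrinsics of the first view: rotation and translation of the pattern
# (world) frame with respect to the camera frame
R_CW, _ = cv2.Rodrigues(rvecs[0])
T_CW = tvecs[0].ravel()
```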
4 EXPERIMENTS
4.1 Experimental Setup
The hybrid tracker calibration procedure described in the previous section was tested experimentally. We have de-