[Simulink block diagram: camera simulator (csfcn camera), controller (csfcn/msfcn), CameraPose, CurrentFrame, simulate Frame, HTM-to-XYZOAT conversion, feature points p1–p4, and input u.]
Figure 2: The Simulink model.
The hardware available for this task consisted of a
Kawasaki industrial robot with a camera mounted on
its end-effector and a vision-sensor network for captur-
ing and processing images from the camera.
The controller was implemented as a C++ class in
order to guarantee the future reusability of the code
in different scenarios (as explained below). Initially,
the controller requires the camera intrinsic parameters
and the desired image coordinates of the four feature
points. It then extracts the current image coordinates
and calculates the linear and angular velocities of the
end-effector, i.e., the control inputs sent to the robot.
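A minimal sketch of such a controller class, assuming hypothetical names (`IBVSController`, `CameraIntrinsics`, `Twist`) and a simplified translation-only proportional law in place of the full interaction-matrix controller described in the paper:

```cpp
#include <array>
#include <cmath>

struct CameraIntrinsics { double fx, fy, cx, cy; };   // pixels
struct Twist { double vx, vy, vz, wx, wy, wz; };      // m/s, rad/s

// Illustrative only: holds the intrinsics and the desired image
// coordinates (u1,v1,...,u4,v4) of the four feature points, and maps
// the current feature error to end-effector velocities.
class IBVSController {
public:
    IBVSController(const CameraIntrinsics& k,
                   const std::array<double, 8>& desired)
        : k_(k), desired_(desired) {}

    Twist compute(const std::array<double, 8>& current,
                  double depth = 1.0,      // assumed feature depth Z
                  double gain  = 0.5) const {
        // Mean feature error in normalized image coordinates.
        double ex = 0.0, ey = 0.0;
        for (int i = 0; i < 4; ++i) {
            ex += (current[2 * i]     - desired_[2 * i])     / k_.fx;
            ey += (current[2 * i + 1] - desired_[2 * i + 1]) / k_.fy;
        }
        ex /= 4.0;
        ey /= 4.0;
        // For translation parallel to the image plane the interaction
        // matrix entry is -1/Z, so v = gain * Z * e drives the feature
        // error exponentially to zero.
        Twist t{};
        t.vx = gain * depth * ex;
        t.vy = gain * depth * ey;
        return t;
    }

private:
    CameraIntrinsics k_;
    std::array<double, 8> desired_;
};
```

The real controller would use the full 8x6 image Jacobian to command all six velocity components; the sketch keeps only the two in-plane translations to stay short.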
In order to safely test the controller, the real
robot/camera pair was replaced by two different simula-
tors. The first simulator, implemented in
MATLAB/Simulink, simulates an arbitrary motion of
the camera in space. The camera is represented by a
coordinate frame, as briefly described later. With
this simulator it is possible to move the camera ac-
cording to exactly the given velocities.
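One way such a frame simulator can advance the camera pose is to integrate the commanded twist over each simulation step. The following sketch is illustrative (hypothetical names, velocities assumed to be expressed in the world frame, Rodrigues' formula for the incremental rotation), not the original Simulink implementation:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

struct Pose { Mat3 R; Vec3 p; };   // camera orientation and position

static Mat3 matmul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Rotation by angle |w|*dt about axis w/|w| (Rodrigues' formula).
static Mat3 expOmega(const Vec3& w, double dt) {
    Mat3 I{{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};
    double n = std::sqrt(w[0] * w[0] + w[1] * w[1] + w[2] * w[2]);
    if (n < 1e-12) return I;            // no rotation commanded
    double th = n * dt;
    Vec3 a{w[0] / n, w[1] / n, w[2] / n};
    Mat3 K{{{0, -a[2], a[1]}, {a[2], 0, -a[0]}, {-a[1], a[0], 0}}};
    Mat3 K2 = matmul(K, K);
    Mat3 R{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            R[i][j] = I[i][j] + std::sin(th) * K[i][j]
                              + (1.0 - std::cos(th)) * K2[i][j];
    return R;
}

// Advance the camera frame by one step of the commanded velocities.
void step(Pose& pose, const Vec3& v, const Vec3& w, double dt) {
    for (int i = 0; i < 3; ++i) pose.p[i] += v[i] * dt;   // translate
    pose.R = matmul(expOmega(w, dt), pose.R);             // rotate
}
```

For a constant twist over the step this rotation update is exact, which is what allows the camera to be moved according to exactly the given velocities.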
In all testing scenarios performed, the camera sim-
ulator needed to output realistic image coordinates of
the feature points. To achieve this, the simulator re-
lied on a very accurate calibration procedure (Hirsh
et al., 2001) as well as the exact coordinates of the feature
points in space with respect to the camera. Given these,
the simulator could return the image coordinates
of the feature points at each time instant t. The camera
simulator was also implemented as a C++ class.
The second simulator is a program provided by
Kawasaki Japan. It can execute exactly the same
software as the real robot and therefore allowed
testing of the code used to move the real robot.
This code is responsible for the forward and inverse
kinematics, as well as the dynamics, of the robot.
The basic structure of the Simulink model can be
seen in Figure 2.
In this work, we do not report the results from
the tests with the real robot. Instead, in order to demon-
strate the system in a more realistic setting, noise was
added to the image processing algorithm and a time
discretization of the image acquisition was introduced
to simulate the camera.
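A sketch of how such a disturbance model could look: Gaussian pixel noise on each measurement plus a zero-order hold at an assumed camera frame period. The class name, noise level, and seed are all illustrative, not taken from the original implementation:

```cpp
#include <array>
#include <random>

// Adds Gaussian noise to the exact feature coordinates and holds each
// noisy image for one camera period, emulating discrete acquisition.
class NoisySampledCamera {
public:
    NoisySampledCamera(double periodSeconds, double sigmaPixels)
        : period_(periodSeconds), noise_(0.0, sigmaPixels) {}

    std::array<double, 8> measure(double t,
                                  const std::array<double, 8>& exact) {
        if (t - lastSample_ >= period_) {    // a new image is acquired
            lastSample_ = t;
            for (int i = 0; i < 8; ++i)
                held_[i] = exact[i] + noise_(rng_);
        }
        return held_;                        // held between samples
    }

private:
    double period_;
    double lastSample_ = -1e9;               // forces a sample at t = 0
    std::array<double, 8> held_{};
    std::mt19937 rng_{42};                   // fixed seed: repeatable runs
    std::normal_distribution<double> noise_;
};
```

Between camera frames the controller keeps seeing the last acquired (noisy) image, which reproduces the discrete nature of the real vision pipeline.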
3.1 Describing the Pose and Velocity of Objects
The position and orientation (pose) of a rigid ob-
ject in space can be described by the pose of an at-
tached coordinate frame. There are several possible
notations to represent the pose of a target coordinate
frame with respect to a reference one, including the
homogeneous transformation matrix, Euler Angles,
etc. (Saeed, 2001) and (Spong and Vidyasagar, 1989).
Since we were using the Kawasaki robot and simula-
tor, we adopted the XYZOAT notation as defined by
Kawasaki. In that system, the pose of a frame $F$ with
respect to a reference frame ${}^{*}F$ is described by three
translational and three rotational parameters, that is,
the Cartesian coordinates X, Y, and Z, plus the Orien-
tation, Approach, and Tool angles in the vector form
$$\mathbf{X} = \begin{bmatrix} x & y & z & \phi & \theta & \psi \end{bmatrix}^{T}.$$
This notation is equivalent to the homogeneous
transformation matrix:
$$H = \begin{bmatrix}
C_\phi C_\theta C_\psi - S_\phi S_\psi & -C_\phi C_\theta S_\psi - S_\phi C_\psi & C_\phi S_\theta & x \\
S_\phi C_\theta C_\psi + C_\phi S_\psi & -S_\phi C_\theta S_\psi + C_\phi C_\psi & S_\phi S_\theta & y \\
-S_\theta C_\psi & S_\theta S_\psi & C_\theta & z \\
0 & 0 & 0 & 1
\end{bmatrix} \quad (15)$$
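The conversion from the XYZOAT vector to the matrix above is a direct transcription of Eq. (15), which follows the Z-Y-Z Euler convention. A sketch (the function name is ours, not the vendor's):

```cpp
#include <array>
#include <cmath>

using HTM = std::array<std::array<double, 4>, 4>;

// Builds the homogeneous transformation of Eq. (15) from the XYZOAT
// parameters (x, y, z, phi, theta, psi); angles in radians.
HTM xyzoatToHTM(double x, double y, double z,
                double phi, double theta, double psi) {
    double cf = std::cos(phi),   sf = std::sin(phi);
    double ct = std::cos(theta), st = std::sin(theta);
    double cp = std::cos(psi),   sp = std::sin(psi);
    return {{
        { cf * ct * cp - sf * sp, -cf * ct * sp - sf * cp, cf * st, x },
        { sf * ct * cp + cf * sp, -sf * ct * sp + cf * cp, sf * st, y },
        {              -st * cp,                  st * sp,      ct, z },
        {                   0.0,                      0.0,     0.0, 1.0 }
    }};
}
```

With all three angles zero the rotation block reduces to the identity, which is a quick sanity check on the transcription.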
ICINCO 2008 - International Conference on Informatics in Control, Automation and Robotics