per was due to Miyazaki and Masutani in 1990, where a control scheme delivers bounded control actions belonging to the Transpose Jacobian-based family, a philosophy first introduced by (Takegaki and Arimoto, 1981). Kelly addresses the visual servoing of planar robot manipulators under the fixed-camera configuration in (Reyes, 1998). Malis et al. (1999) proposed a new approach to vision-based robot control, called 2-1/2-D visual servoing (Malis, 2005). The visual servoing problem is addressed by coupling nonlinear control theory with a convenient representation of the visual information used by the robot in (Conticelli, 2001).
(Park and Lee, 2003) present a visual servoing control for a ball on a flat plate to track its desired trajectory. A novel approach was proposed in (Kelly, 1996), which addresses the application of the velocity field control philosophy to visual servoing of robot manipulators under a fixed-camera configuration. Schramm et al. present a novel visual servoing approach, aimed at controlling the so-called extended-2D (E2D) coordinates of the points constituting a tracked target, and provide simulation results (Reyes, 1997). Malis and Benhimane (2005) present a generic and flexible system for vision-based robot control. Their system integrates visual tracking and visual servoing approaches in a unifying framework (Malis, 2003).
In this paper we address the positioning problem of planar robot manipulators through position-based visual servoing under the fixed-camera configuration. Our main contribution is the development of a new family of position-based visual controllers supported by a rigorous local asymptotic stability analysis that takes into account the full nonlinear robot dynamics and the vision model. The control objective is defined in terms of joint coordinates deduced from visual information. In order to show the performance of the proposed family, two members have been experimentally tested on a two-degree-of-freedom direct-drive vertical robot arm.
This paper is organized as follows. In Section 2,
we present the robotic system model, the vision model
and the formulation of the control problem, then the
proposed visual controller is introduced and analyzed.
Section 3 presents the experimental set-up. The ex-
perimental results are described in Section 4. Finally,
we offer some conclusions in Section 5.
2 ROBOTIC SYSTEM MODEL
The robotic system considered in this paper is composed of a direct-drive robot and a CCD camera placed in the robot workspace in the fixed-camera configuration.
2.1 Robot Dynamics
The dynamic model of a robot manipulator plays an important role in the simulation of motion, the analysis of manipulator structures, and the design of control algorithms. The dynamic equation of an n-degrees-of-freedom robot, in agreement with the Euler-Lagrange methodology (Spong, 1989), is given by
M(q)q̈ + C(q, q̇)q̇ + g(q) = τ    (1)
where q is the n × 1 vector of joint displacements, q̇ is the n × 1 vector of joint velocities, τ is the n × 1 vector of applied torques, M(q) is the n × n symmetric positive definite manipulator inertia matrix, C(q, q̇) is the n × n matrix of centripetal and Coriolis torques, and g(q) is the n × 1 vector of gravitational torques.
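As an illustration of model (1), the following sketch evaluates the joint torques for a planar two-link arm with revolute joints, using standard textbook expressions for M(q), C(q, q̇) and g(q); the link parameters (masses m1, m2, link length l1, centers of mass lc1, lc2, inertias I1, I2) are assumed values chosen for the example only and do not correspond to the experimental robot described later.

import numpy as np

# Illustrative parameters (not those of the experimental robot).
m1, m2 = 1.0, 1.0        # link masses [kg]
l1 = 0.45                # length of link 1 [m]
lc1, lc2 = 0.25, 0.20    # distances to the centers of mass [m]
I1, I2 = 0.02, 0.01      # link inertias about the centers of mass [kg m^2]
g0 = 9.81                # gravitational acceleration [m/s^2]

def dynamics_terms(q, qd):
    # Standard two-link expressions for M(q), C(q, qd) and g(q).
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    M = np.array([[m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2,
                   m2*(lc2**2 + l1*lc2*c2) + I2],
                  [m2*(lc2**2 + l1*lc2*c2) + I2,
                   m2*lc2**2 + I2]])
    h = -m2*l1*lc2*s2
    C = np.array([[h*qd[1], h*(qd[0] + qd[1])],
                  [-h*qd[0], 0.0]])
    g = np.array([(m1*lc1 + m2*l1)*g0*np.cos(q[0]) + m2*lc2*g0*np.cos(q[0] + q[1]),
                  m2*lc2*g0*np.cos(q[0] + q[1])])
    return M, C, g

def torque(q, qd, qdd):
    # Torque required by (1): tau = M(q) qdd + C(q, qd) qd + g(q).
    M, C, g = dynamics_terms(q, qd)
    return M @ qdd + C @ qd + g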
It is assumed that the robot links are joined to-
gether with revolute joints. Although the equation
of motion (1) is complex, it has several fundamen-
tal properties which can be exploited to facilitate the
control system design. For the new control scheme,
the following important property is used:
Property 1. The matrix C(q, q̇) and the time derivative Ṁ(q) of the inertia matrix both satisfy [12]:

q̇^T [ (1/2)Ṁ(q) − C(q, q̇) ] q̇ = 0   ∀ q, q̇ ∈ R^n.    (2)
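Property 1 can be checked numerically. A minimal sketch, assuming the same illustrative two-link parameters as above, evaluates q̇^T[(1/2)Ṁ(q) − C(q, q̇)]q̇ at random states; since (1/2)Ṁ(q) − C(q, q̇) is skew-symmetric for these standard expressions, the result is zero up to rounding error.

import numpy as np

m2, l1, lc2 = 1.0, 0.45, 0.20   # illustrative parameters, as above

def skew_symmetry_residual(q, qd):
    # qd^T [ (1/2) Mdot(q) - C(q, qd) ] qd for the two-link example.
    h = -m2*l1*lc2*np.sin(q[1])
    # Only the cos(q2) terms of M(q) vary with time.
    Mdot = np.array([[2*h*qd[1], h*qd[1]],
                     [h*qd[1],   0.0]])
    C = np.array([[h*qd[1], h*(qd[0] + qd[1])],
                  [-h*qd[0], 0.0]])
    return qd @ (0.5*Mdot - C) @ qd

rng = np.random.default_rng(0)
for _ in range(5):
    q = rng.uniform(-np.pi, np.pi, 2)
    qd = rng.uniform(-1.0, 1.0, 2)
    print(skew_symmetry_residual(q, qd))   # ~0 up to rounding error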
2.1.1 Model of Direct Kinematics
Direct kinematics is a vectorial function that relates joint coordinates with Cartesian coordinates, f : R^n → R^m, where n is the number of degrees of freedom and m represents the dimension of the Cartesian coordinate frame.
The position x_R ∈ R^3 of the end-effector with respect to the robot coordinate frame in terms of the joint positions is given by x_R = f(q).
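For instance, for a planar two-link arm with link lengths l1 and l2 (illustrative symbols, not the parameters of the experimental set-up), the direct kinematics restricted to the plane of motion reads

x_R = f(q) = [ l1 cos(q1) + l2 cos(q1 + q2),  l1 sin(q1) + l2 sin(q1 + q2) ]^T,

the remaining Cartesian coordinate being constant.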
2.2 Vision Model
The goal of a machine vision system is to create
a model of the real world from images. A ma-
chine vision system recovers useful information on
a scene from its two-dimensional projections. Since
images are two-dimensional projections of the three-
dimensional world, this recovery requires the inver-
sion of a many-to-one mapping (see Figure 1).
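As a generic illustration of this many-to-one character, the following sketch projects a 3-D point, expressed in the camera frame, onto the image plane using a standard pinhole model; the focal length, scale factor and image center used here are assumed values and are not necessarily those of the camera model adopted in this work.

import numpy as np

def pinhole_projection(p_cam, f=0.008, alpha=72000.0, u0=320.0, v0=240.0):
    # Project a 3-D point (camera frame, Z > 0) to pixel coordinates.
    #   f      -- focal length [m]          (illustrative value)
    #   alpha  -- scale factor [pixel/m]    (illustrative value)
    #   u0, v0 -- image center [pixel]      (illustrative values)
    X, Y, Z = p_cam
    u = u0 + alpha * f * X / Z
    v = v0 + alpha * f * Y / Z
    return np.array([u, v])

# Every point along the same optical ray yields the same pixel coordinates,
# which is why depth cannot be recovered from a single projection.
p = np.array([0.10, 0.05, 1.0])
print(pinhole_projection(p), pinhole_projection(2.0 * p))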
Let Σ_R = {R_1, R_2, R_3} be a Cartesian frame attached to the robot base, where the axes R_1, R_2 and R_3