trajectory length (Chesi and Vicino, 2004), control in-
variant to intrinsic parameters (Malis, 2004), use of
complex image features via image moments (Tahri
and Chaumette, 2005), global motion plan via naviga-
tion functions (Cowan and Chang, 2005), use of cylin-
drical coordinate systems (Iwatsuki and Okiyama,
2005), enlargement of stability regions (Tarbouriech
et al., 2005), and model-less control (Miura et al.,
2006).
Path-planning strategies have also been proposed
in order to take into account multiple constraints, such
as limited field of view of the camera and limited
workspace of the robot. See for instance (Mezouar
and Chaumette, 2002; Park and Chung, 2003; Deng
et al., 2005; Allotta and Fioravanti, 2005; Yao and
Gupta, 2007; Kazemi et al., 2009) and references
therein. These methods generally adopt potential fields along a reference trajectory in order to fulfill the required constraints: the potential fields leave the chosen reference trajectory unaffected wherever the constraints are satisfied, while they make the camera deviate from this path wherever a constraint is violated. The planned trajectory is then followed by tracking its image projection through an image-based controller such as the one proposed in (Mezouar and Chaumette, 2002).
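As a rough illustration of this mechanism (a minimal sketch, not taken from any of the cited works), a repulsive term can be made to vanish while a constraint margin is positive and to push the camera away from the reference point once the margin becomes negative; the margin function g, its gradient grad_g and the gain k_rep below are hypothetical placeholders.

```python
import numpy as np

def repulsive_correction(x, g, grad_g, k_rep=1.0):
    """Illustrative repulsive correction applied to a reference trajectory point x.
    g(x) > 0  : constraint satisfied -> zero correction (reference path unchanged)
    g(x) <= 0 : constraint violated  -> push x along the gradient of g
    (g, grad_g and k_rep are hypothetical placeholders.)"""
    margin = g(x)
    if margin > 0.0:
        return np.zeros_like(x)
    return k_rep * (-margin) * grad_g(x)

# Hypothetical constraint: keep the first camera coordinate below 0.5.
g = lambda x: 0.5 - x[0]
grad_g = lambda x: np.array([-1.0, 0.0, 0.0])
x_ref = np.array([0.7, 0.0, 0.0])                       # reference point violating the constraint
x_new = x_ref + repulsive_correction(x_ref, g, grad_g)  # pushed back towards the feasible set
```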
In this paper we propose a parametrization of the trajectories connecting the initial location to the desired one, together with dedicated optimization techniques for identifying the trajectories that satisfy the required constraints. Specifically, this parametrization is obtained by estimating the relative camera pose between these two locations and the position of the object points in three-dimensional space. These estimates are computed from the available image point correspondences between the initial and desired views and from the available estimate of the camera intrinsic parameters. Typical trajectory constraints, such as the limited field of view of the camera and the limited workspace of the robot, are then formulated in terms of positivity of certain polynomials. The positivity of these polynomials is imposed by using suitable relaxations for constrained optimization. These relaxations can be formulated in terms of LMIs (linear matrix inequalities), whose feasibility can be checked via convex programming tools. Some examples are reported to illustrate the application of the proposed approach.
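To give a flavour of this kind of LMI relaxation (a minimal sketch, not the specific formulation developed in this paper), the following fragment checks nonnegativity of a univariate polynomial over the real line by searching for a positive semidefinite Gram matrix, i.e., a sum-of-squares certificate, with the convex programming tool cvxpy; the example polynomial and the default solver are arbitrary choices.

```python
import numpy as np
import cvxpy as cp

# Coefficients of p(t) = 4 - 3*t^2 + t^4, stored in ascending powers of t (example data).
c = np.array([4.0, 0.0, -3.0, 0.0, 1.0])
d = (len(c) - 1) // 2                  # half degree; monomial basis z(t) = (1, t, ..., t^d)

# p is nonnegative on the real line iff p(t) = z(t)^T Q z(t) for some Q >= 0,
# which is an LMI feasibility problem in the Gram matrix Q.
Q = cp.Variable((d + 1, d + 1), PSD=True)
constraints = [
    # Match the coefficient of t^k with the k-th anti-diagonal sum of Q.
    sum(Q[i, k - i] for i in range(max(0, k - d), min(k, d) + 1)) == c[k]
    for k in range(2 * d + 1)
]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print("sum-of-squares certificate found:", problem.status == cp.OPTIMAL)
```

If the LMI is feasible, the recovered matrix Q certifies that p(t) can be written as a sum of squares and is therefore nonnegative for every real t.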
This paper extends our previous works (Chesi and Hung, 2007), where a path-planning method based on the computation of the roots of polynomials was proposed (the advantage of the present approach over that method is the use of LMIs), and (Chesi, 2009b), where a planning strategy is derived by using homogeneous forms (the advantage here is the use of more general relaxations, which may allow one to take into account more complex constraints).
The organization of the paper is as follows. Sec-
tion 2 introduces the notation, problem formulation,
and some preliminaries about representation of poly-
nomials. Section 3 describes the proposed strategy for
trajectory planning. Section 4 illustrates the simula-
tion and experimental results. Lastly, Section 5 pro-
vides some final remarks.
2 PRELIMINARIES
In this section we introduce some preliminaries,
namely the notation, problem formulation, and a tool
for representing polynomials.
2.1 Notation and Problem Formulation
Let us start by introducing the notation adopted
throughout the paper:
- $\mathbb{R}$: set of real numbers;
- $0_n$: $n \times 1$ null vector;
- $I_n$: $n \times n$ identity matrix;
- $\|v\|$: Euclidean norm of vector $v$.
We consider a generic stereo vision system, where
two cameras are observing a common set of object
points in the scene. The symbols $F_{ini}$ and $F_{des}$ represent the frames of the camera in the initial and desired location, respectively. These frames are expressed as
$$F_{ini} = \{R_{ini}, t_{ini}\}, \qquad F_{des} = \{R_{des}, t_{des}\} \qquad (1)$$
where $R_{ini}, R_{des} \in \mathbb{R}^{3 \times 3}$ are rotation matrices, and $t_{ini}, t_{des} \in \mathbb{R}^{3}$ are translation vectors. These quantities $R_{ini}$, $R_{des}$, $t_{ini}$ and $t_{des}$ are expressed with respect to an absolute frame, which is indicated by $F_{abs}$.
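For illustration only (a minimal sketch under an assumed convention, since the text does not restate how $\{R, t\}$ maps between frames), the displacement between the initial and desired camera frames can be composed from the absolute-frame quantities in (1) as follows; the convention $x_{cam} = R\,x_{abs} + t$ is an assumption.

```python
import numpy as np

def is_rotation(R, tol=1e-9):
    """True if R is a valid rotation matrix (orthonormal with determinant +1)."""
    return np.allclose(R.T @ R, np.eye(3), atol=tol) and np.isclose(np.linalg.det(R), 1.0, atol=tol)

def relative_pose(R_ini, t_ini, R_des, t_des):
    """Displacement from F_ini to F_des, assuming each frame {R, t} maps
    absolute coordinates into camera coordinates: x_cam = R @ x_abs + t
    (this convention is an assumption, not stated in the text)."""
    R_rel = R_des @ R_ini.T
    t_rel = t_des - R_rel @ t_ini
    return R_rel, t_rel
```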
The observed object points project onto the image plane of the camera in the initial and desired location, yielding the image points $p_1^{ini}, \ldots, p_n^{ini} \in \mathbb{R}^3$ (initial view) and $p_1^{des}, \ldots, p_n^{des} \in \mathbb{R}^3$ (desired view). These image points are expressed in homogeneous coordinates according to
$$p_i^{ini} = \begin{pmatrix} p_{i,1}^{ini} \\ p_{i,2}^{ini} \\ 1 \end{pmatrix}, \qquad p_i^{des} = \begin{pmatrix} p_{i,1}^{des} \\ p_{i,2}^{des} \\ 1 \end{pmatrix} \qquad (2)$$
where $p_{i,1}^{ini}, p_{i,1}^{des} \in \mathbb{R}$ are the components on the x-axis of the image screen, while $p_{i,2}^{ini}, p_{i,2}^{des} \in \mathbb{R}$ are those on the y-axis.
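As a small numerical illustration of (2) (a sketch, not part of the paper's method), homogeneous image points can be generated from object points expressed in the absolute frame by a pinhole projection; the intrinsic matrix K, the point coordinates and the frame convention $x_{cam} = R\,x_{abs} + t$ used below are hypothetical.

```python
import numpy as np

def project_points(K, R, t, X_abs):
    """Project object points (columns of X_abs, absolute frame) onto the image
    plane of a camera with frame {R, t} and intrinsic matrix K, returning
    homogeneous image points of the form (p_{i,1}, p_{i,2}, 1) as in (2).
    The pinhole model and x_cam = R @ x_abs + t are assumptions made here."""
    X_cam = R @ X_abs + t.reshape(3, 1)   # absolute frame -> camera frame
    p = K @ X_cam                         # projection up to scale
    return p / p[2, :]                    # scale so that the third coordinate is 1

# Hypothetical numbers: two object points in front of the camera.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 1.0])
X = np.array([[0.1, -0.2],
              [0.0, 0.1],
              [2.0, 3.0]])
p_img = project_points(K, R, t, X)        # column i is (p_{i,1}, p_{i,2}, 1)
```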