(ė = −λe), where λ is a positive scalar factor which
tunes the speed of convergence:

v = −λ (CWL)⁻¹ e + (CWL)⁻¹ C W ṡ* − (CWL)⁻¹ (C Ẇ + Ċ W) (s − s*(t))    (14)
If C is set to (W* L*)⁺, then (CWL) > 0, so the task
function converges to zero and, in the absence of
local minima and singularities, so does the error
s − s*. In this case, C is constant and therefore
Ċ = 0.
Finally, substituting C by (W* L*)⁺ in equation (14),
we obtain the expression of the camera velocity that
is sent to the robot controller:
v = −(W* L*)⁺ (λ W + Ẇ) (s − s*(t)) + (W* L*)⁺ W ṡ*    (15)
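As a numerical illustration of control law (15), the following Python sketch assembles the camera velocity from the weighted interaction matrix. All dimensions and matrix values are hypothetical stand-ins (none come from the paper); the fragment only shows how the constant pseudoinverse C = (W* L*)⁺ combines the weighted error and reference-motion terms.

```python
import numpy as np

# Hypothetical setup: n = 4 feature points (3 coordinates each),
# 6 camera degrees of freedom. Matrices are random placeholders.
n, dof = 4, 6
rng = np.random.default_rng(0)

L_star = rng.standard_normal((3 * n, dof))       # interaction matrix L* at the reference
W_star = np.diag(rng.uniform(0.5, 1.0, 3 * n))   # reference weight matrix W*
W = np.diag(rng.uniform(0.5, 1.0, 3 * n))        # current weight matrix W
W_dot = np.zeros((3 * n, 3 * n))                 # Ẇ: weights held static in this sketch

s = rng.standard_normal(3 * n)        # current stacked features s
s_star = rng.standard_normal(3 * n)   # reference features s*(t)
s_star_dot = np.zeros(3 * n)          # ṡ*: static reference trajectory

lam = 0.5  # convergence gain λ

# Constant combination matrix C = (W* L*)^+ (Moore-Penrose pseudoinverse).
C = np.linalg.pinv(W_star @ L_star)

# Equation (15): v = -(W* L*)^+ (λW + Ẇ)(s - s*(t)) + (W* L*)^+ W ṡ*
v = -C @ (lam * W + W_dot) @ (s - s_star) + C @ W @ s_star_dot
print(v.shape)  # (6,): one velocity component per camera degree of freedom
```

With a static reference (ṡ* = 0) and constant weights (Ẇ = 0), the law reduces to the familiar proportional form v = −λ (W* L*)⁺ W (s − s*).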
4.3 Visual servoing techniques
The visual servoing techniques used to carry out the
navigation are the image-based and the intrinsic-free
approaches. In the case of the image-based visual
servoing approach, the control law (15) is directly
applicable to ensure continuous navigation of the
mobile robot. When the intrinsic-free approach is
used, however, the technique must be reformulated to
take the weighted features into account.
4.3.1 Intrinsic-free approach
The theoretical background on invariant visual ser-
voing can be found in (Malis, 2002b; Malis, 2002c).
In this section, we modify the approach in order to
take weighted features into account (García et al., 2004).
Basically, the weights Φ_i defined in the previous
subsection must be redistributed (γ_i) in order to be
able to build the invariant projective space Q_{γ_i}
where the control will be defined.
Similarly to standard intrinsic-free visual servoing,
the control of the camera is achieved by stacking all
the reference points of the space Q_{γ_i} in a (3n×1)
vector s*(ξ*) = (q*_1(t), q*_2(t), ..., q*_n(t)).
Likewise, the points measured in the current camera
frame are stacked in the (3n×1) vector
s(ξ) = (q_1(t), q_2(t), ..., q_n(t)). If s(ξ) = s*(ξ*),
then ξ = ξ* and the camera is back at the reference
position, whatever the camera intrinsic parameters.
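The stacking described above can be sketched as follows. The point coordinates are hypothetical and serve only to illustrate building the (3n×1) vectors s(ξ) and s*(ξ*) and checking the convergence condition s(ξ) = s*(ξ*).

```python
import numpy as np

# Hypothetical invariant-space points q_i*: n = 3 points, 3 coordinates each.
n = 3
q_ref = [np.array([0.1, 0.2, 1.0]),
         np.array([0.3, -0.1, 1.0]),
         np.array([-0.2, 0.4, 1.0])]

# Stack the reference points into the (3n x 1) vector s*(xi*).
s_star = np.concatenate(q_ref)

# Current points stacked into s(xi); here the camera is back at the
# reference, so each q_i(xi) equals q_i*(xi*).
s_curr = np.concatenate(q_ref)

assert s_star.shape == (3 * n,)
# s(xi) = s*(xi*) signals convergence, whatever the intrinsic parameters.
print(np.allclose(s_curr, s_star))  # True
```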
In order to control the movement of the camera,
we use the control law (15), where W depends on
the weights previously defined and L is the interac-
tion matrix. The interaction matrix depends on the
current normalized points m_i(ξ) ∈ M (m_i can be
computed from the image points: m_i = K⁻¹ p_i), on
the invariant points q_i(ξ) ∈ Q_γ, on the current depth
distribution z(ξ) = (Z_1, Z_2, ..., Z_n) and on the
current redistributed weights γ_i. The interaction
matrix in the weighted invariant space,
L^{γ_i}_{q_i} = T^{γ_i}_{m_i} (L_{m_i} − C^{γ_i}_i),
is obtained as in (Malis, 2002a), but the term C^{γ_i}_i
must be recomputed in order to take the redistributed
weights γ_i into account.
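The normalization step m_i = K⁻¹ p_i used to feed the interaction matrix can be illustrated as follows. The intrinsic matrix K and the pixel coordinates are hypothetical values chosen for the example, not taken from the experiments.

```python
import numpy as np

# Hypothetical pinhole intrinsic matrix K (focal lengths fx = fy = 600 px,
# principal point at (320, 240)): illustration only.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Image points p_i in homogeneous pixel coordinates (u, v, 1).
p = np.array([[350.0, 260.0, 1.0],
              [280.0, 210.0, 1.0]])

# Normalized points m_i = K^{-1} p_i, one row per point.
m = (np.linalg.inv(K) @ p.T).T
print(m[0])  # approximately [0.05, 0.0333, 1.0]
```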
5 EXPERIMENTS IN A VIRTUAL INDOOR ENVIRONMENT
Exhaustive experiments have been carried out using
a virtual reality tool for modeling an indoor envi-
ronment. To make the simulation more realistic,
errors in the intrinsic and extrinsic parameters of the
camera mounted on the robot, as well as noise in the
extraction of image features, have been considered.
An estimate K̂ of the real matrix K has been used,
with an error of 25% in the focal length and a
deviation of 50 pixels in the position of the optical
center. An estimate T̂_RC of the camera pose with
respect to the robot frame has also been computed,
with a rotation error of uθ = [3.75 3.75 3.75]^T
degrees and a translation error of t = [2 2 0]^T cm.
Errors in the extraction of the current image features
have been simulated by adding normally distributed
noise to the accurately extracted features.
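The calibration-error model described above can be sketched as follows. The true K, the feature coordinates and the 1-pixel noise standard deviation are assumptions for illustration; only the 25% focal-length error and the 50-pixel optical-center deviation come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" intrinsic matrix K.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Estimate K^: 25% focal-length error and a 50-pixel deviation
# of the optical center, as in the simulation described above.
K_hat = K.copy()
K_hat[0, 0] *= 1.25          # fx error
K_hat[1, 1] *= 1.25          # fy error
K_hat[0, 2] += 50.0          # optical-center deviation (u)
K_hat[1, 2] += 50.0          # optical-center deviation (v)

# Noisy feature extraction: zero-mean Gaussian noise added to the exact
# image features (1-pixel standard deviation, an assumed value).
p_exact = np.array([[350.0, 260.0], [280.0, 210.0]])
p_noisy = p_exact + rng.normal(0.0, 1.0, p_exact.shape)
```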
Figure 7 shows the control signals sent to the robot
controller using the classical image-based approach
and the image-based approach with weighted features.
Figure 7 (a,b,c) presents details of the control law
using the classical image-based approach, in which
the discontinuities can be clearly appreciated. To
show the improvements of the new formulation
presented in this paper, the control law using the
image-based approach with weighted features is
shown in Figure 7 (d,e,f).
The same experiment is then performed using the
intrinsic-free visual servoing approach. Figure 8
shows the control signals sent to the robot controller
using the intrinsic-free approach, together with some
details in which the discontinuities can be clearly
appreciated. The improvements of the new formula-
tion of the intrinsic-free approach with weighted
features are presented in Figure 9, which shows the
same details of the control law as Figure 8. Compar-
ing both figures and their details, the continuity of
the control law is self-evident despite the noise in the
extraction of image features.
In (García et al., 2004), a comparison between this
method and simple filtering of the control law was
also presented. The results in that paper corroborate
that the new weighted-features approach works better
than simply filtering the control signals.
IMAGE-BASED AND INTRINSIC-FREE VISUAL NAVIGATION OF A MOBILE ROBOT DEFINED AS A GLOBAL
VISUAL SERVOING TASK