40, 60 and 115 pixels, and $d_-$, $d_0$, $d_+$ to 0.3 m, 0.4 m and 0.5 m. For each numerical scheme (Euler, RK4, ABM and BDF), we have performed the same navigation task, starting from the same situation and using the same $s^\star$. Figure 6(c) shows that the BDF scheme is the most efficient and ABM the worst, while RK4 and Euler give correct results for this task. Indeed, as the obstacle avoidance induces large variations in the camera motion, ODE (2) becomes stiff, and the BDF scheme has been proven to be more suitable in such cases.
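To make this comparison concrete, the sketch below shows one such prediction step. It assumes that ODE (2) has the classical form $\dot{s} = L(s)\,v$, with $L$ the standard interaction matrix of a normalized image point; the camera twist $v$, the depth $Z$ (held constant here for brevity, although it also evolves with the camera motion), and the pairing of SciPy's stiff-capable BDF solver with a naive fixed-step Euler loop are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): integrating the feature-motion
# ODE  s_dot = L(s, Z) @ v  to predict an image point while it is lost.
import numpy as np
from scipy.integrate import solve_ivp

def interaction_matrix(x, y, Z):
    """Classical 2x6 interaction matrix of a normalized image point."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def s_dot(t, s, v, Z):
    """Right-hand side of the ODE: image velocity of the point for twist v."""
    x, y = s
    return interaction_matrix(x, y, Z) @ v

# Illustrative camera twist (vx, vy, vz, wx, wy, wz), depth and initial point.
v = np.array([0.1, 0.0, 0.05, 0.0, 0.2, 0.0])
Z = 1.5
s0 = np.array([0.1, -0.05])   # feature position when it was last seen
t_span = (0.0, 1.0)           # duration of the visual-data loss

# Stiff-capable BDF scheme (the one favoured by the comparison) ...
sol_bdf = solve_ivp(s_dot, t_span, s0, args=(v, Z), method="BDF")

# ... versus a naive fixed-step Euler integration.
s_euler, dt = s0.copy(), 1e-2
for _ in range(int(t_span[1] / dt)):
    s_euler = s_euler + dt * s_dot(0.0, s_euler, v, Z)

print("BDF estimate:  ", sol_bdf.y[:, -1])
print("Euler estimate:", s_euler)
```

On smooth camera motions the two estimates stay close; it is when the avoidance phases make $v$ vary sharply that fixed-step schemes degrade while the BDF retains its accuracy.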
Figures 6(a) and 6(b) show the simulation results obtained with the BDF scheme. The task is perfectly performed despite the wall and the circular obstacle. The different phases of the motion can be seen in the evolution of $\mu_{coll}$ and $\mu_{occ}$. At the beginning of the task, there is no risk of collision or occlusion, and the robot is driven by $\dot{q}_{VS}$. When it enters the neighborhood of the wall, $\mu_{coll}$ increases and $\dot{q}_{coll}$ is applied to the robot, which follows the security envelope $\xi_0$ while centering the landmark. When the circular obstacle enters the camera field of view, $\mu_{occ}$ increases and the pan-platform control smoothly switches from $\varpi_{coll}$ to $\widetilde{\varpi}_{coll}$. It is then possible to move along the security envelope $\xi_0$ while tracking a "virtual" target until the end of the occlusion. When the danger has passed, the control switches back to $\dot{q}_{VS}$ and the robot perfectly realizes the desired task.
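The switching strategy described above lends itself to a compact summary. Assuming, as the text suggests, that the global controller blends the visual-servoing input $\dot{q}_{VS}$ with the avoidance input $\dot{q}_{coll}$ through the weight $\mu_{coll} \in [0, 1]$, a hypothetical implementation could look as follows; the linear weight profile and the helper names are illustrative, and only the envelope distances $d_-$ = 0.3 m and $d_+$ = 0.5 m are taken from the values quoted above.

```python
# Hedged sketch of the blended controller: pure visual servoing far from
# obstacles, pure avoidance on the security envelope, smooth mix between.
import numpy as np

def mu(d, d_minus, d_plus):
    """Risk weight rising linearly from 0 (at d_plus) to 1 (at d_minus)."""
    return float(np.clip((d_plus - d) / (d_plus - d_minus), 0.0, 1.0))

def blended_control(q_dot_vs, q_dot_coll, d_obs, d_minus=0.3, d_plus=0.5):
    """Global input: (1 - mu) * VS controller + mu * avoidance controller."""
    mu_coll = mu(d_obs, d_minus, d_plus)
    return (1.0 - mu_coll) * q_dot_vs + mu_coll * q_dot_coll

# Example: robot 0.4 m from the wall -> the two laws are blended half-way.
q_vs = np.array([0.3, 0.0, 0.1])      # (v, omega, pan rate), illustrative
q_coll = np.array([0.2, 0.5, -0.1])
print(blended_control(q_vs, q_coll, d_obs=0.4))
```

Because the weight varies continuously with the distance to the obstacle, the robot input never jumps when the control switches, which is consistent with the smooth transitions visible in the evolution of $\mu_{coll}$ and $\mu_{occ}$.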
4 CONCLUSIONS
In this paper, we have proposed to apply classical numerical integration algorithms to estimate the visual features whenever they are unavailable during a vision-based task. The resulting algorithms have been validated both in simulation and in experiments, with promising results. A comparative analysis has shown that the BDF scheme is particularly efficient when ODE (2) becomes stiff, while still giving correct results in more common situations. It therefore appears to be the most interesting scheme.