ment, both the linear and the angular velocities calculated by the controller are equal to zero. This happened because, at that instant, the robot lost the information about the person in at least one of the cameras of the stereo vision system. This behavior was adopted to prevent the robot from going out of control whenever the vision system loses track of the human. Thus, whenever the robot loses its leader, it stops and waits for a new face detection.
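A minimal sketch of this stop-and-wait fail-safe is given below (Python, for illustration only; the detection interface and the function names are assumptions and do not correspond to the actual controller implementation used in the experiments):

from typing import Optional, Tuple

# Hypothetical detection type: pixel coordinates of the face centre in one
# image of the stereo pair (None when no face is found in that image).
Detection = Optional[Tuple[float, float]]

def fail_safe_velocities(left: Detection, right: Detection,
                         v_ctrl: float, w_ctrl: float) -> Tuple[float, float]:
    # If the face is missing in at least one image, the leader's position
    # cannot be triangulated, so the robot stops and waits for a new
    # detection; otherwise the controller outputs are passed through.
    if left is None or right is None:
        return 0.0, 0.0          # stop and wait for a new face detection
    return v_ctrl, w_ctrl        # follow the controller commands

# Example: detection lost in the right camera, so the robot stops.
print(fail_safe_velocities((312.0, 240.5), None, 0.25, 0.1))  # (0.0, 0.0)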
Although there was an obstacle in the middle of the desired trajectory, the human was able to guide the robot around it and reach the desired point.
5 CONCLUSIONS AND FUTURE WORK
This paper presented an approach to formation control between a human and a mobile robot using stereo vision. The strategy uses the detection method presented by (Viola and Jones, 2001) to find the facial features needed to estimate the human's position. A stable nonlinear controller is proposed to allow the robot to perform the task in cooperation with the human. The effectiveness of the proposed method is verified through experiments in which a human guides the robot from an initial position to a desired point while moving either forward or backwards.
Our future work is concerned with improving feature detection in order to better estimate the human's position and orientation. We also intend to introduce a second robot into the formation, so that the robots will be able to carry larger and heavier loads.
REFERENCES
Althaus, P., Ishiguro, H., Kanda, T., Miyashita, T., and Christensen, H. (2004). Navigation for human-robot interaction tasks. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation (ICRA '04), volume 2, pages 1894–1900.
Bicho, E. and Monteiro, S. (2003). Formation control for multiple mobile robots: a non-linear attractor dynamics approach. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), volume 2, pages 2016–2022.
Bowling, A. and Olson, E. (2009). Human-robot team dynamic performance in assisted living environments. In PETRA '09: Proceedings of the 2nd International Conference on PErvasive Technologies Related to Assistive Environments, pages 1–6, New York, NY, USA. ACM.
Chuy, O., Hirata, Y., and Kosuge, K. (2007). Environment feedback for robotic walking support system control. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation (ICRA), pages 3633–3638.
Das, A., Fierro, R., Kumar, V., Ostrowski, J., Spletzer, J., and Taylor, C. (2002). A vision-based formation control framework. IEEE Transactions on Robotics and Automation, 18(5):813–825.
Egerstedt, M. and Hu, X. (2001). Formation constrained multi-agent control. In Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA), volume 4, pages 3961–3966.
Egerstedt, M., Hu, X., and Stotsky, A. (2001). Control of mobile platforms using a virtual vehicle approach. IEEE Transactions on Automatic Control, 46(11):1777–1782.
Jadbabaie, A., Lin, J., and Morse, A. (2002). Coordination of groups of mobile autonomous agents using nearest neighbor rules. In Proceedings of the 41st IEEE Conference on Decision and Control, volume 3, pages 2953–2958.
Pereira, F. G. (2006). Navegação e desvio de obstáculos usando um robô móvel dotado de sensor de varredura laser. Master's thesis, Universidade Federal do Espírito Santo - UFES.
Vidal, R., Shakernia, O., and Sastry, S. (2003). Formation control of nonholonomic mobile robots with omnidirectional visual servoing and motion segmentation. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation (ICRA '03), volume 1, pages 584–589.
Viola, P. and Jones, M. (2001). Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), volume 1, pages I-511–I-518.