clockwise. Recognition rates were measured for different values of α and remained useful for small inclinations: 90% (60 out of 66) for α = 20, 83.8% (57 out of 68) for α = 40, but only 9.5% (6 out of 63) for α = 50.
6 CONCLUSIONS
Four simple but useful computer vision techniques have been described, suitable for human-robot interaction. First, an omnidirectional camera setup is described that can detect people in the surroundings of the robot, giving their angular positions and a rough estimate of their distance; the device can be easily built with inexpensive components. Second, we comment on a color-based face detection technique that can alleviate skin-color false positives. Third, a simple head nod and shake detector is described, suitable for detecting affirmative/negative, approval/disapproval, and understanding/disbelief head gestures. The four techniques have been implemented and tested on a prototype social robot.
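The nod/shake detection idea summarized above can be sketched as a simple rule on the trajectory of a tracked facial point: vertical motion dominating indicates a nod, horizontal motion dominating indicates a shake. The function and threshold values below are illustrative assumptions, not the paper's implementation (which would track the point with an optical-flow feature tracker):

```python
def classify_head_gesture(points, min_travel=10.0, dominance=2.0):
    """Classify a tracked face-point trajectory as 'nod', 'shake' or 'none'.

    points    : list of (x, y) image positions of a facial feature over time.
    min_travel: minimum total motion (pixels) before reporting any gesture.
    dominance : how much one axis must dominate the other.
    Both thresholds are hypothetical values chosen for illustration.
    """
    # Accumulate absolute displacement along each image axis.
    dx = sum(abs(points[i + 1][0] - points[i][0]) for i in range(len(points) - 1))
    dy = sum(abs(points[i + 1][1] - points[i][1]) for i in range(len(points) - 1))

    if dx + dy < min_travel:
        return "none"      # head essentially still
    if dy > dominance * dx:
        return "nod"       # vertical motion dominates
    if dx > dominance * dy:
        return "shake"     # horizontal motion dominates
    return "none"          # ambiguous diagonal motion

# Example: an up-down oscillation of the tracked point reads as a nod.
trajectory = [(100, 100), (100, 112), (100, 100), (100, 112)]
print(classify_head_gesture(trajectory))  # → nod
```

In a full system the trajectory would come from frame-to-frame tracking of a stable facial feature (e.g. between the eyes), with the detector reset whenever the face is lost.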
ACKNOWLEDGEMENTS
This work was partially funded by research project UNI2005/18 of Universidad de Las Palmas de Gran Canaria and by the Spanish Ministry of Education and Science and FEDER (TIN2004-07087).