the player hits the ball, one can compare the predicted ball trajectory with the actual foot trajectory and evaluate the prediction accuracy. This comparison is important because no ground truth is available.
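To make this indirect evaluation concrete, the following is a minimal Python sketch of the idea: at the estimated moment of contact the predicted ball position and the tracked foot position should coincide, so their distance serves as an accuracy proxy. All names and the dummy trajectories are illustrative assumptions, not output of the actual system.

import numpy as np

def prediction_error_at_kick(predicted_ball, foot_track, t_kick):
    """Accuracy proxy without ground truth: distance between the predicted
    ball position and the tracked foot position at the moment of the kick.

    predicted_ball : callable mapping time t -> predicted 3-D ball position
    foot_track     : callable mapping time t -> tracked 3-D foot position
    t_kick         : estimated time of ball contact
    """
    return float(np.linalg.norm(predicted_ball(t_kick) - foot_track(t_kick)))

# Dummy straight-line trajectories standing in for real tracker output:
predicted_ball = lambda t: np.array([2.0, 0.0, 0.2]) + t * np.array([-3.0, 0.0, 1.0])
foot_track     = lambda t: np.array([1.7, 0.05, 0.1]) + t * np.array([-2.8, 0.0, 0.9])
print(prediction_error_at_kick(predicted_ball, foot_track, t_kick=0.1))  # metres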
5.3 Virtual Reality Display
The experiments above have the drawback that they are evaluated by an expert looking at the vision system's output. The most direct proof that this output is all that is needed for playing soccer would be to give a human just that output via a head-mounted display and see whether he or she can play.
The approach is of course fascinating and direct, but we have some concerns regarding safety. In any case, this experiment becomes relevant only once we are convinced that the system is feasible in principle, so it is something to worry about later.
6 CONCLUSIONS
In this position paper, we have outlined the road to a vision system for a human-robot soccer match. We claim that, since soccer is a rather structured environment, the basic techniques are available and the goal could be reached within a decade. The main challenge will be robustness, which we propose to address by optimizing a global likelihood function working on a history of raw images. We have outlined a sequence of experiments to evaluate such a vision system with data from a camera-inertial system mounted on the head of a human soccer player.
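As an illustration of what optimizing a global likelihood function over a history of raw images could look like, here is a minimal sketch. The per-image measurement model and the optimizer are assumptions made for the sake of the example, not the paper's concrete proposal.

import numpy as np

def global_log_likelihood(states, images, log_p_image_given_state):
    """Score a whole hypothesized state trajectory against all raw images
    jointly, instead of committing to hard per-frame detections.

    states                  : sequence of hypothesized states, one per image
    images                  : the stored history of raw camera images
    log_p_image_given_state : assumed measurement model log p(image | state),
                              e.g. comparing predicted projections of ball,
                              lines, and players with the observed pixels
    """
    return sum(log_p_image_given_state(img, x) for img, x in zip(images, states))

# Robustness would then come from maximizing this joint score over the
# trajectory, e.g. with a generic optimizer (illustrative only):
#   from scipy.optimize import minimize
#   result = minimize(lambda p: -global_log_likelihood(unpack(p), images, loglik), p0)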
The reason we are confident that such a system can be realized within a decade is the insight that it does not require general common-sense reasoning AI. This is good news for the RoboCup 2050 challenge. But it also suggests that meeting that challenge will not imply that we have realized the dream of a thinking machine with which the whole challenge started. That would not be the first time.