• We verify which of these 3D points is seen by the four cameras.
• For each point seen by two or more cameras, we project a ray from the optical center of each camera to the respective 3D point. We compute the angle between the rays of every pair of cameras and store it in a data structure.
• Finally, for each 3D point we choose the camera pair whose angle is closest to 90°.
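The ray-angle selection described in the steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the camera positions and ball coordinates are hypothetical, and the pairwise angle comparison assumes the point is expressed in the same world frame as the optical centers.

```python
import itertools
import numpy as np

def best_camera_pair(point, centers):
    """For a 3D point seen by two or more cameras, compute the angle
    between the viewing rays of every camera pair and return the pair
    whose angle is closest to 90 degrees (best triangulation geometry)."""
    best_pair, best_angle = None, None
    for i, j in itertools.combinations(range(len(centers)), 2):
        # Rays from each optical center to the 3D point.
        r1 = point - centers[i]
        r2 = point - centers[j]
        cos_a = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        if best_angle is None or abs(angle - 90.0) < abs(best_angle - 90.0):
            best_pair, best_angle = (i, j), angle
    return best_pair, best_angle

# Hypothetical layout: four cameras near the field corners, ball mid-field.
centers = [np.array([0.0, 0.0, 2.0]), np.array([10.0, 0.0, 2.0]),
           np.array([0.0, 6.0, 2.0]), np.array([10.0, 6.0, 2.0])]
ball = np.array([5.0, 3.0, 0.1])
pair, angle = best_camera_pair(ball, centers)
```

Angles near 90° are preferred because the triangulated position is least sensitive to small errors in the ray directions at that geometry.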
6 CONCLUSIONS
We have presented in this paper an autonomous system for the detection of objects of interest in a robotic soccer game, based on the use of multiple digital cameras. We presented preliminary results on the triangulation of the information acquired from two cameras, applied to the detection of the soccer ball. These results show errors on the order of millimeters in the detection of the center of the ball. Moreover, we proposed the use of this system with three or four digital cameras, whose strategic positions on the field have been thoroughly studied in order to guarantee an optimal joint field of view. We are confident that these configurations can lead to even better results in object detection, and this will be the next step in the development of this system. The final and complete system is intended as a ground truth vision system that can be used for the validation of robotic vision systems in soccer games.
ACKNOWLEDGEMENTS
This work was developed at the Institute of Electronic and Telematic Engineering of the University of Aveiro and was partially supported by FEDER through the Operational Program Competitiveness Factors - COMPETE FCOMP-01-0124-FEDER-022682 (FCT reference PEst-C/EEI/UI0127/2011) and by National Funds through FCT - Foundation for Science and Technology, in the context of a PhD grant (FCT reference SFRH/BD/85855/2012).
A Ground Truth Vision System for Robotic Soccer