Table 1: Estimated radii vs. the number of virtual cameras.

# of virtual cameras    6      12     18     24     30     36     42
estimated radius (cm)   21.53  21.35  21.00  20.43  20.25  20.15  20.15
6n image planes; and 3) finding the intersection of the
occupancy for all 6n cameras. Table 1 shows the relationship
between the reconstructed radius and the number of cameras
used for reconstruction. As expected, the error decreases as
the number of cameras increases. That is because, among other
reasons, the occupancy defined by each camera view forms a
cone in space, and the intersection of any subset of camera
views approximates the sphere by the surfaces of those cones.
Every time a new camera is added to the subset, the
approximation moves closer to the actual shape of the sphere.
Since this procedure also relies on a sphere-fitting algorithm
to circumscribe the occupied voxels, the detected radius
always tends to be larger than the actual radius. For the real
data, six cameras were used to take images of a ball. For each
image, a circular Hough transform was used to detect the
boundary and the 2D radius of the ball. As before, we relied
on a voxel-carving approach to reconstruct the ball. Figures
3(a) and (b) depict the reconstructed sphere for the synthetic
and the real data, respectively. For the real ball, also with
a radius of 20 cm, a 3D sphere was fitted and its radius
estimated: the framework using real cameras yielded 24.3 cm.
Finally, Figure 3(c) depicts the result of our multi-view
algorithm for 3D modeling.
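The carve-and-fit procedure described above can be sketched in a few lines. The following is a minimal, illustrative Python sketch, not the paper's implementation: it assumes orthographic virtual cameras spaced evenly around the object (the silhouette of a sphere is then a disk, so each camera's occupancy cone reduces to a cylinder test), and the grid resolution, extents, and function names are our own.

```python
import math

def carve_sphere(radius, cam_angles, grid=24, extent=30.0):
    """Keep the voxels whose projection lies inside every camera's silhouette.

    Each camera looks along direction (cos t, sin t, 0); under orthographic
    projection a voxel centre (x, y, z) maps to image coordinates
    (-x*sin t + y*cos t, z), and the sphere's silhouette is a disk.
    """
    step = 2.0 * extent / grid
    occupied = []
    for i in range(grid):
        for j in range(grid):
            for k in range(grid):
                x = -extent + (i + 0.5) * step
                y = -extent + (j + 0.5) * step
                z = -extent + (k + 0.5) * step
                # A voxel survives only if its projection falls inside the
                # silhouette disk seen from every camera.
                if all((-x * math.sin(t) + y * math.cos(t)) ** 2 + z * z
                       <= radius * radius for t in cam_angles):
                    occupied.append((x, y, z))
    return occupied, step

def circumscribed_radius(voxels, step):
    """Radius of the smallest origin-centred sphere enclosing whole voxels."""
    # Inflating by half the voxel diagonal circumscribes entire voxels,
    # which biases the estimate high -- matching the trend in Table 1.
    half_diag = 0.5 * step * math.sqrt(3.0)
    return max(math.dist(v, (0.0, 0.0, 0.0)) for v in voxels) + half_diag
```

With camera angles chosen as i*pi/n, the six-camera angle set is contained in the twelve-camera set, so adding cameras can only shrink the carved volume; the estimated radius therefore decreases toward the true radius from above, mirroring the behaviour reported in Table 1.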
5 CONCLUSIONS
We have presented a novel method for autonomous camera
calibration of a multi-camera rig. The experimental results
showed that the algorithm is essential for obtaining good 3D
reconstructions. That is, the algorithm’s selection of the
best images for calibration improves the calibration accuracy
by as much as ten times over that obtained without the
algorithm. Finally, an application of the camera rig was
presented in which a sphere was placed in the middle of the
rig and a 3D representation of that same sphere was
constructed, with a reconstruction error (real cameras)
approaching the theoretical error (synthetic cameras).