5.5 Continuous Optimization for
Dynamic Environments
Since a moving object is being tracked, the optimal placement of the movable cameras should be recomputed continuously over time. A camera can be placed closer to the object to obtain a larger accuracy improvement, but if the object moves fast, the camera may be unable to follow it because of its dynamic constraints. In that case it can be better to place the camera farther away, so that it can follow the object along a longer path. The dynamic constraints of the movable cameras could therefore also be taken into account.
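This trade-off can be sketched as a greedy re-placement loop. The accuracy model used below (a fixed preferred standoff distance), the time step, and all parameter names are illustrative assumptions for demonstration, not the formulation of this paper:

```python
import math

# Illustrative sketch of continuous re-placement of one movable camera.
# The accuracy model (a fixed preferred standoff distance) and every
# parameter value here are assumptions for demonstration only.

V_CAM_MAX = 1.0       # camera's maximum speed (its dynamic constraint)
PREFERRED_DIST = 3.0  # standoff trading accuracy against the ability to follow
DT = 0.1              # re-optimization period

def step_camera(cam, obj):
    """Move the camera toward a point PREFERRED_DIST away from the object,
    displacing it by at most V_CAM_MAX * DT in one time step."""
    dx, dy = obj[0] - cam[0], obj[1] - cam[1]
    d = math.hypot(dx, dy) or 1e-9
    # Desired position: on the object-camera line, PREFERRED_DIST from the object.
    gx = obj[0] - PREFERRED_DIST * dx / d - cam[0]
    gy = obj[1] - PREFERRED_DIST * dy / d - cam[1]
    g = math.hypot(gx, gy)
    if g < 1e-9:
        return cam
    step = min(g, V_CAM_MAX * DT)  # dynamic constraint: bounded displacement
    return (cam[0] + step * gx / g, cam[1] + step * gy / g)

# Object moving along the x-axis; the camera is re-placed every DT seconds.
cam = (0.0, 5.0)
for t in range(100):
    obj = (0.05 * t, 0.0)
    cam = step_camera(cam, obj)
```

A real system would replace the standoff heuristic with the localization-accuracy measure as the objective, subject to the same per-step displacement bound imposed by the camera's dynamics.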
6 CONCLUSIONS
In this paper the multi-camera localization accuracy and the optimal camera placement are examined. First, the camera model is formulated and the localization accuracy is defined for one camera observing a single point. A method for calculating the localization accuracy using multiple cameras is then given. All calculations are performed in 2D and later extended to 3D. Two measures are defined, and their benefits and disadvantages are compared. In both cases, the objective function is evaluated for the case of adding a new camera to the system, and the optimization of the new camera's placement is discussed. The general extension to 3D is described. Finally, the plans for future work are formulated.
ACKNOWLEDGEMENTS
This work was partially supported by the European Union and the European Social Fund through project FuturICT.hu (grant no.: TAMOP-4.2.2.C-11/1/KONV-2012-0013) organized by VIKING Zrt. Balatonfüred.
This work was partially supported by the Hungarian Government, managed by the National Development Agency, and financed by the Research and Technology Innovation Fund (grant no.: KMR 12-1-2012-0441).