Improving the Egomotion Estimation by Correcting the Calibration Bias

Ivan Krešo, Siniša Šegvić


We present a novel approach for improving the accuracy of egomotion recovered from rectified stereoscopic video. The main idea is to correct the camera calibration by exploiting known ground-truth motion. The correction is described by a discrete deformation field over a rectangular superpixel lattice covering the whole image. The deformation field is recovered by optimizing the reprojection error of point-feature correspondences in neighboring stereo frames under the ground-truth motion. We evaluate the proposed approach through leave-one-out experiments on a collection of KITTI sequences sharing common calibration parameters, comparing the accuracy of stereoscopic visual odometry obtained with the original and the corrected calibration. The results suggest a clear and significant advantage of the proposed approach. Our best algorithm outperforms all other approaches based on two-frame correspondences on the KITTI odometry benchmark.
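The abstract describes minimizing the reprojection error of stereo correspondences under a known motion. As a rough illustrative sketch (not the authors' implementation), the snippet below triangulates a rectified stereo match, applies a known rigid motion, and measures the reprojection residual in the next frame; the intrinsics `f, cx, cy` and baseline `b` are made-up KITTI-like values.

```python
import numpy as np

# Illustrative rectified-stereo intrinsics and baseline (assumed values,
# loosely resembling the KITTI setup; not taken from the paper).
f, cx, cy, b = 718.0, 607.0, 185.0, 0.54

def triangulate(uL, vL, uR):
    """Recover a 3D point (left-camera frame) from a rectified stereo match."""
    d = uL - uR                 # disparity in pixels
    Z = f * b / d               # depth from disparity
    X = (uL - cx) * Z / f
    Y = (vL - cy) * Z / f
    return np.array([X, Y, Z])

def project(P):
    """Pinhole projection of a 3D point into the left image."""
    X, Y, Z = P
    return np.array([f * X / Z + cx, f * Y / Z + cy])

def reprojection_error(match_t, match_t1, R, t):
    """Triangulate at frame t, move the point by the known motion (R, t),
    and compare its projection with the measured feature at frame t+1."""
    P = triangulate(*match_t)   # match_t = (uL, vL, uR) at frame t
    P1 = R @ P + t              # point expressed in the frame-(t+1) camera
    return np.linalg.norm(project(P1) - np.asarray(match_t1))
```

In the paper's setting, residuals of this kind are summed over many correspondences under the ground-truth motion, and the per-cell calibration deformation is the quantity being optimized rather than the motion itself.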


  1. Agarwal, S., Mierle, K., and Others (2014). Ceres solver.
  2. Badino, H. and Kanade, T. (2011). A head-wearable short-baseline stereo system for the simultaneous estimation of structure and motion. In IAPR Conference on Machine Vision Applications, pages 185-189.
  3. Badino, H., Yamamoto, A., and Kanade, T. (2013). Visual odometry by multi-frame feature integration. In First International Workshop on Computer Vision for Autonomous Driving at ICCV.
  4. Diosi, A., Segvic, S., Remazeilles, A., and Chaumette, F. (2011). Experimental evaluation of autonomous driving based on visual memory and image-based visual servoing. IEEE Transactions on Intelligent Transportation Systems, 12(3):870-883.
  5. Fraundorfer, F. and Scaramuzza, D. (2012). Visual odometry: Part II: Matching, robustness, optimization, and applications. IEEE Robotics & Automation Magazine, 19(2):78-90.
  6. Geiger, A., Lenz, P., Stiller, C., and Urtasun, R. (2013). Vision meets robotics: The KITTI dataset. International Journal of Robotics Research (IJRR).
  7. Geiger, A., Lenz, P., and Urtasun, R. (2012). Are we ready for autonomous driving? The KITTI vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR).
  8. Geiger, A., Ziegler, J., and Stiller, C. (2011). StereoScan: Dense 3D reconstruction in real-time. In IEEE Intelligent Vehicles Symposium (IV).
  9. Harris, C. and Stephens, M. (1988). A combined corner and edge detector. In Proceedings of the Alvey Vision Conference, pages 147-152.
  10. Hartley, R. I. and Kang, S. B. (2005). Parameter-free radial distortion correction with centre of distortion estimation. In ICCV, pages 1834-1841.
  11. Howard, A. (2008). Real-time stereo visual odometry for autonomous ground vehicles. In IROS, pages 3946-3952.
  12. Konolige, K. and Agrawal, M. (2008). FrameSLAM: From bundle adjustment to real-time visual mapping. IEEE Transactions on Robotics, 24(5):1066-1077.
  13. Konolige, K., Agrawal, M., and Solà, J. (2007). Large-scale visual odometry for rough terrain. In ISRR, pages 201-212.
  14. Krešo, I., Ševrović, M., and Šegvić, S. (2013). A novel georeferenced dataset for stereo visual odometry. CoRR, abs/1310.0310.
  15. Martull, S., Peris, M., and Fukui, K. (2012). Realistic CG stereo image dataset with ground truth disparity maps. Technical report of IEICE, PRMU, 111(430):117-118.
  16. Moravec, H. P. (1980). Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover. PhD thesis, Stanford University.
  17. Moravec, H. P. (1981). Rover visual obstacle avoidance. In IJCAI, pages 785-790.
  18. Nedevschi, S., Popescu, V., Danescu, R., Marita, T., and Oniga, F. (2013). Accurate ego-vehicle global localization at intersections through alignment of visual data with digital map. IEEE Transactions on Intelligent Transportation Systems, 14(2):673-687.
  19. Nistér, D., Naroditsky, O., and Bergen, J. R. (2004). Visual odometry. In CVPR (1), pages 652-659.
  20. Scaramuzza, D. and Fraundorfer, F. (2011). Visual odometry [tutorial]. IEEE Robotics & Automation Magazine, 18(4):80-92.
  21. Sturm, P. F., Ramalingam, S., Tardif, J., Gasparini, S., and Barreto, J. (2011). Camera models and fundamental concepts used in geometric computer vision. Foundations and Trends in Computer Graphics and Vision, 6(1-2):1-183.
  22. Vogel, C., Roth, S., and Schindler, K. (2014). View-consistent 3D scene flow estimation over multiple frames. In ECCV, pages 263-278.
  23. Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell., 22(11):1330-1334.

Paper Citation

in Harvard Style

Krešo I. and Šegvić S. (2015). Improving the Egomotion Estimation by Correcting the Calibration Bias. In Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2015) ISBN 978-989-758-091-8, pages 347-356. DOI: 10.5220/0005316103470356

in Bibtex Style

@conference{visapp15,
author={Ivan Krešo and Siniša Šegvić},
title={Improving the Egomotion Estimation by Correcting the Calibration Bias},
booktitle={Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2015)},
year={2015},
pages={347-356},
doi={10.5220/0005316103470356},
isbn={978-989-758-091-8},
}

in EndNote Style

JO - Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2015)
TI - Improving the Egomotion Estimation by Correcting the Calibration Bias
SN - 978-989-758-091-8
AU - Krešo I.
AU - Šegvić S.
PY - 2015
SP - 347
EP - 356
DO - 10.5220/0005316103470356