REFERENCES
Bradski, G. (2000). The OpenCV Library. Dr. Dobb’s Journal of Software Tools.
Chahine, G. and Pradalier, C. (2018). Survey of monocular SLAM algorithms in natural environments. In 2018 15th Conference on Computer and Robot Vision (CRV), pages 345–352.
Civera, J., Davison, A. J., and Montiel, J. M. (2008). Inverse depth parametrization for monocular SLAM. IEEE Transactions on Robotics, 24(5):932–945.
Engel, J., Koltun, V., and Cremers, D. (2017). Direct sparse odometry. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(3):611–625.
Engel, J., Usenko, V., and Cremers, D. (2016). A photometrically calibrated benchmark for monocular visual odometry. arXiv preprint arXiv:1607.02555.
Ferrera, M., Eudes, A., Moras, J., Sanfourche, M., and Besnerais, G. L. (2021). OV²SLAM: A fully online and versatile visual SLAM for real-time applications. IEEE Robotics and Automation Letters, 6(2):1399–1406.
Forster, C., Pizzoli, M., and Scaramuzza, D. (2014). SVO: Fast semi-direct monocular visual odometry. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 15–22. IEEE.
Forster, C., Zhang, Z., Gassner, M., Werlberger, M., and Scaramuzza, D. (2017). SVO: Semidirect visual odometry for monocular and multicamera systems. IEEE Transactions on Robotics, 33(2):249–265.
Gálvez-López, D. and Tardós, J. D. (2012). Bags of binary words for fast place recognition in image sequences. IEEE Transactions on Robotics, 28(5):1188–1197.
Gao, X., Wang, R., Demmel, N., and Cremers, D. (2018). LDSO: Direct sparse odometry with loop closure. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2198–2204. IEEE.
Geiger, A., Lenz, P., and Urtasun, R. (2012). Are we ready for autonomous driving? The KITTI vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR).
Klein, G. and Murray, D. (2007). Parallel tracking and mapping for small AR workspaces. In 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, pages 225–234. IEEE.
Lee, S. H. and Civera, J. (2018). Loosely-coupled semi-direct monocular SLAM. IEEE Robotics and Automation Letters, 4(2):399–406.
Loo, S. Y., Amiri, A. J., Mashohor, S., Tang, S. H., and Zhang, H. (2019). CNN-SVO: Improving the mapping in semi-direct visual odometry using single-image depth prediction. In 2019 International Conference on Robotics and Automation (ICRA), pages 5218–5223. IEEE.
Lothe, P., Bourgeois, S., Dekeyser, F., Royer, E., and Dhome, M. (2009). Towards geographical referencing of monocular SLAM reconstruction using 3D city models: Application to real-time accurate vision-based localization. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 2882–2889.
Mur-Artal, R., Montiel, J. M. M., and Tardós, J. D. (2015). ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 31(5):1147–1163.
Mur-Artal, R. and Tardós, J. D. (2017). ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Transactions on Robotics, 33(5):1255–1262.
Newcombe, R. A., Lovegrove, S. J., and Davison, A. J. (2011). DTAM: Dense tracking and mapping in real-time. In 2011 International Conference on Computer Vision, pages 2320–2327. IEEE.
Singandhupe, A. and La, H. M. (2019). A review of SLAM techniques and security in autonomous driving. In 2019 Third IEEE International Conference on Robotic Computing (IRC), pages 602–607.
Strasdat, H., Montiel, J., and Davison, A. J. (2010). Scale drift-aware large scale monocular SLAM. Robotics: Science and Systems VI, 2(3):7.
von Stumberg, L., Usenko, V., Engel, J., Stückler, J., and Cremers, D. (2017). From monocular SLAM to autonomous drone exploration. In 2017 European Conference on Mobile Robots (ECMR), pages 1–8.