Foundation (FAPESP).
REFERENCES
Bischoff, B., Nguyen-Tuong, D., Streichert, F., Ewert, M., and Knoll, A. (2012). Fusing vision and odometry for accurate indoor robot localization. In 2012 12th International Conference on Control Automation Robotics & Vision (ICARCV), pages 347–352.
Bojarski, M., Testa, D. D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L. D., Monfort, M., Muller, U., Zhang, J., Zhang, X., Zhao, J., and Zieba, K. (2016). End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316.
Borges, P. V. K., Zlot, R., and Tews, A. (2013). Integrating off-board cameras and vehicle on-board localization for pedestrian safety. IEEE Transactions on Intelligent Transportation Systems, 14(2):720–730.
Bosse, M. and Zlot, R. (2009). Continuous 3D scan-matching with a spinning 2D laser. In 2009 IEEE International Conference on Robotics and Automation, pages 4312–4319.
Burgard, W., Stachniss, C., Grisetti, G., Steder, B., Kümmerle, R., Dornhege, C., Ruhnke, M., Kleiner, A., and Tardós, J. D. (2009). Trajectory-based comparison of SLAM algorithms. In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS).
Churchill, W. and Newman, P. (2012). Practice makes perfect? Managing and leveraging visual experiences for lifelong navigation. In 2012 IEEE International Conference on Robotics and Automation (ICRA), pages 4525–4532. IEEE.
Corke, P., Lobo, J., and Dias, J. (2007). An introduction to inertial and visual sensing. The International Journal of Robotics Research, 26(6):519–535.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Egger, P., Borges, P. V., Catt, G., Pfrunder, A., Siegwart, R., and Dubé, R. (2018). PoseMap: Lifelong, multi-environment 3D LiDAR localization. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3430–3437. IEEE.
Forster, C., Carlone, L., Dellaert, F., and Scaramuzza, D. (2017). On-manifold preintegration for real-time visual–inertial odometry. IEEE Transactions on Robotics, 33(1):1–21.
Furgale, P. and Barfoot, T. D. (2010). Visual teach and repeat for long-range rover autonomy. Journal of Field Robotics, 27(5):534–560.
Guilherme, R., Marques, F., Lourenço, A., Mendonça, R., Santana, P., and Barata, J. (2016). Context-aware switching between localisation methods for robust robot navigation: A self-supervised learning approach. In 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 4356–4361. IEEE.
Julier, S. J. and Uhlmann, J. K. (2004). Unscented filtering and nonlinear estimation. Proceedings of the IEEE, 92(3):401–422.
Keselman, L., Iselin Woodfill, J., Grunnet-Jepsen, A., and Bhowmik, A. (2017). Intel RealSense stereoscopic depth cameras. arXiv e-prints.
Lottes, P., Behley, J., Milioto, A., and Stachniss, C. (2018). Fully convolutional networks with sequential information for robust crop and weed detection in precision farming. IEEE Robotics and Automation Letters (RA-L), 3:3097–3104.
Lowry, S., Sünderhauf, N., Newman, P., Leonard, J. J., Cox, D., Corke, P., and Milford, M. J. (2016). Visual place recognition: A survey. IEEE Transactions on Robotics, 32(1):1–19.
Maybeck, P. S. (1990). The Kalman filter: An introduction to concepts. In Autonomous Robot Vehicles, pages 194–204. Springer New York, New York, NY.
McManus, C., Furgale, P., Stenning, B., and Barfoot, T. D. (2012). Visual teach and repeat using appearance-based LiDAR. In 2012 IEEE International Conference on Robotics and Automation, pages 389–396.
Moore, T. and Stouch, D. (2014). A generalized extended Kalman filter implementation for the Robot Operating System. In Proceedings of the 13th International Conference on Intelligent Autonomous Systems (IAS-13). Springer.
Moravec, H. P. (1980). Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover. PhD thesis, Stanford University, Stanford, CA, USA. AAI8024717.
Mur-Artal, R. and Tardós, J. D. (2017). ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Transactions on Robotics, 33(5):1255–1262.
Pfrunder, A., Borges, P. V., Romero, A. R., Catt, G., and Elfes, A. (2017). Real-time autonomous ground vehicle navigation in heterogeneous environments using a 3D LiDAR. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2601–2608. IEEE.
Pizer, S. M., Amburn, E. P., Austin, J. D., Cromartie, R., Geselowitz, A., Greer, T., ter Haar Romeny, B., Zimmerman, J. B., and Zuiderveld, K. (1987). Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing, 39(3):355–368.
Quigley, M., Conley, K., Gerkey, B. P., Faust, J., Foote, T., Leibs, J., Wheeler, R., and Ng, A. Y. (2009). ROS: An open-source Robot Operating System. In ICRA Workshop on Open Source Software.
Rechy Romero, A., Koerich Borges, P. V., Elfes, A., and Pfrunder, A. (2016). Environment-aware sensor fusion for obstacle detection. In 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), pages 114–121. IEEE.
Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Suger, B., Steder, B., and Burgard, W. (2016). Terrain-adaptive obstacle detection. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).