ACKNOWLEDGEMENTS
This work was supported by JSPS KAKENHI Grant
Number JP18H01463.
REFERENCES
J. Eguchi and K. Ozaki, “Extraction Method of Travelable Area by Using of 3D-Laser Scanner - Development of Autonomous Mobile Robot for Urban Area,” Transactions of the Society of Instrument and Control Engineers, vol. 52, no. 3, pp. 152–159, 2016.
H. Suzuki, A. Oya, and S. Yuda, “Obstacle Avoidance of Mobile Robot Considering 3D Shape of Environment,” Robomec, 1998.
X. Li and R. Belaroussi, “Semi-Dense 3D Semantic Mapping from Monocular SLAM,” Computer Vision and Pattern Recognition, 2016.
W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, et al., “SSD: Single Shot MultiBox Detector,” European Conference on Computer Vision, Springer International Publishing, pp. 21–37, 2016.
M. Mao, H. Zhang, S. Li, and B. Zhang, “SEMANTIC-RTAB-MAP (SRM): A semantic SLAM system with CNNs on depth images,” Mathematical Foundations of Computing, 2019.
R. Q. Charles, H. Su, K. Mo, and L. J. Guibas, “PointNet: Deep learning on point sets for 3D classification and segmentation,” IEEE Conference on Computer Vision and Pattern Recognition, 2017.
M. Labbé and F. Michaud, “RTAB-Map as an Open-Source
Lidar and Visual SLAM Library for Large-Scale and
Long-Term Online Operation,” in Journal of Field
Robotics, vol. 36, no. 2, pp. 416–446, 2019.
M. Labbé and F. Michaud, “Long-term online multi-session graph-based SPLAM with memory management,” in Autonomous Robots, vol. 42, no. 6, pp. 1133–1150, 2018.
M. Labbé and F. Michaud, “Online Global Loop Closure
Detection for Large-Scale Multi-Session Graph-Based
SLAM,” in Proceedings of the IEEE/RSJ International
Conference on Intelligent Robots and Systems, 2014.
M. Labbé and F. Michaud, “Appearance-Based Loop Closure Detection for Online Large-Scale and Long-Term Operation,” in IEEE Transactions on Robotics, vol. 29, no. 3, pp. 734–745, 2013.
M. Labbé and F. Michaud, “Memory management for real-
time appearance-based loop closure detection,” in
Proceedings of the IEEE/RSJ International Conference
on Intelligent Robots and Systems, pp. 1271–1276,
2011.
C. Yu et al., “BiSeNet: Bilateral segmentation network for real-time semantic segmentation,” European Conference on Computer Vision, pp. 325–341, 2018.
F. Chollet, “Xception: Deep Learning with Depthwise Separable Convolutions,” IEEE Conference on Computer Vision and Pattern Recognition, 2017.
M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The Cityscapes dataset for semantic urban scene understanding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3213–3223, 2016.
R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, “g2o: A general framework for graph optimization,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3607–3613, 2011.
B. Zhou, H. Zhao, X. Puig, T. Xiao, S. Fidler, A. Barriuso, and A. Torralba, “Semantic Understanding of Scenes through the ADE20K Dataset,” International Journal of Computer Vision, 2018.
C. Kerl, J. Sturm, and D. Cremers, “Dense visual SLAM for RGB-D cameras,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2100–2106, 2013.
A. S. Huang, A. Bachrach, P. Henry, M. Krainin, D. Maturana, D. Fox, and N. Roy, “Visual odometry and mapping for autonomous flight using an RGB-D camera,” in Proceedings of the International Symposium on Robotics Research, 2011.
R. Mur-Artal and J. D. Tardós, “ORB-SLAM2: An open-source SLAM system for monocular, stereo and RGB-D cameras,” IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.