
IMU-assisted semantic segmentation. Robotics and
Autonomous Systems, 104:1–13.
Camara, L. G., Gäbert, C., and Přeučil, L. (2020). Highly robust visual place recognition through spatial matching of CNN features. In IEEE International Conference on Robotics and Automation, pages 3748–3755.
Camara, L. G. and Přeučil, L. (2019). Spatio-semantic ConvNet-based visual place recognition. In European Conference on Mobile Robots, pages 1–8.
Chazal, F., Guibas, L. J., Oudot, S. Y., and Skraba, P.
(2013). Persistence-based clustering in Riemannian
manifolds. Journal of the ACM, 60(6):1–38.
Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A. L. (2017a). DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848.
Chen, Z., Maffra, F., Sa, I., and Chli, M. (2017b). Only look once: Mining distinctive landmarks from ConvNet for visual place recognition. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 9–16.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255.
Girshick, R. (2015). Fast R-CNN. In Proceedings of the
IEEE International Conference on Computer Vision,
pages 1440–1448.
Griffith, S., Chahine, G., and Pradalier, C. (2017). Symphony Lake dataset. International Journal of Robotics Research, 36(11):1151–1158.
Heidarsson, H. K. and Sukhatme, G. S. (2011). Obstacle de-
tection and avoidance for an autonomous surface ve-
hicle using a profiling sonar. In IEEE International
Conference on Robotics and Automation, pages 731–
736.
Khaliq, A., Ehsan, S., Chen, Z., Milford, M., and
McDonald-Maier, K. (2019). A holistic visual place
recognition approach using lightweight CNNs for sig-
nificant viewpoint and appearance changes. IEEE
Transactions on Robotics, 36(2):561–569.
Kristan, M., Kenk, V. S., Kovačič, S., and Perš, J. (2015). Fast image-based obstacle detection from unmanned surface vehicles. IEEE Transactions on Cybernetics, 46(3):641–654.
Lowe, D. G. (1999). Object recognition from local scale-
invariant features. In Proceedings of the IEEE Inter-
national Conference on Computer Vision, volume 2,
pages 1150–1157.
Moosbauer, S., Konig, D., Jakel, J., and Teutsch, M. (2019).
A benchmark for deep learning based object detec-
tion in maritime environments. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pat-
tern Recognition Workshops.
Onunka, C. and Bright, G. (2010). Autonomous marine
craft navigation: On the study of radar obstacle detec-
tion. In International Conference on Control Automa-
tion Robotics & Vision, pages 567–572.
Ren, S., He, K., Girshick, R., and Sun, J. (2015). Faster R-
CNN: Towards real-time object detection with region
proposal networks. Advances in Neural Information
Processing Systems, 28.
Ruiz, A. R. J. and Granja, F. S. (2009). A short-range ship
navigation system based on ladar imaging and target
tracking for improved safety and efficiency. IEEE
Transactions on Intelligent Transportation Systems,
10(1):186–197.
Simonyan, K. and Zisserman, A. (2014). Very deep con-
volutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556.
Steccanella, L., Bloisi, D. D., Castellini, A., and Farinelli,
A. (2020). Waterline and obstacle detection in im-
ages from low-cost autonomous boats for environ-
mental monitoring. Robotics and Autonomous Sys-
tems, 124:103346.
Sünderhauf, N., Shirazi, S., Jacobson, A., Dayoub, F., Pepperell, E., Upcroft, B., and Milford, M. (2015). Place recognition with ConvNet landmarks: Viewpoint-robust, condition-robust, training-free. Robotics: Science and Systems, pages 1–10.
Tolias, G., Sicre, R., and Jégou, H. (2015). Particular object retrieval with integral max-pooling of CNN activations. arXiv preprint arXiv:1511.05879.
Uijlings, J. R. R., Van De Sande, K. E. A., Gevers, T., and
Smeulders, A. W. M. (2013). Selective search for ob-
ject recognition. International Journal of Computer
Vision, 104:154–171.
Vo, H. V., Pérez, P., and Ponce, J. (2020). Toward unsupervised, multi-object discovery in large-scale image collections. In Proceedings of the European Conference on Computer Vision, pages 779–795.
Xue, J., Chen, Z., Papadimitriou, E., Wu, C., and
Van Gelder, P. H. A. J. M. (2019a). Influence of envi-
ronmental factors on human-like decision-making for
intelligent ship. Ocean Engineering, 186:106060.
Xue, J., Wu, C., Chen, Z., Van Gelder, P. H. A. J. M.,
and Yan, X. (2019b). Modeling human-like decision-
making for inbound smart ships based on fuzzy
decision trees. Expert Systems with Applications,
115:172–188.
Yan, X., Ma, F., Liu, J., and Wang, X. (2019). Applying the
navigation brain system to inland ferries. In Proceed-
ings of the Conference on Computer and IT Applica-
tions in the Maritime Industries, pages 25–27.
Zhang, X., Wang, C., Jiang, L., An, L., and Yang, R. (2021).
Collision-avoidance navigation systems for maritime
autonomous surface ships: A state of the art survey.
Ocean Engineering, 235:109380.
Zhou, B., Lapedriza, A., Xiao, J., Torralba, A., and Oliva,
A. (2014). Learning deep features for scene recogni-
tion using places database. Advances in Neural Infor-
mation Processing Systems, 27.
ICPRAM 2024 - 13th International Conference on Pattern Recognition Applications and Methods