Authors:
Yifei Zhang; Olivier Morel; Marc Blanchon; Ralph Seulin; Mojdeh Rastgoo and Désiré Sidibé
Affiliation:
ImViA Laboratory EA 7535, ERL VIBOT CNRS 6000, Université de Bourgogne Franche-Comté, France
Keyword(s):
Semantic Segmentation, Multimodal Fusion, Deep Learning, Road Scenes.
Related Ontology Subjects/Areas/Topics:
Applications; Computer Vision, Visualization and Computer Graphics; Image and Video Analysis; Image Formation and Preprocessing; Multimodal and Multi-Sensor Models of Image Formation; Pattern Recognition; Robotics; Segmentation and Grouping; Software Engineering
Abstract:
Deep neural networks have been widely used for semantic scene understanding in recent years. Effective and robust segmentation of outdoor scenes is a prerequisite for the safe navigation of autonomous vehicles. In this paper, we aim to find the best exploitation of different imaging modalities for road scene segmentation, as opposed to using a single RGB modality. We explore deep learning-based early and late fusion patterns for semantic segmentation and propose a new multi-level feature fusion network. Given a pair of aligned multimodal images, the network achieves faster convergence and incorporates more contextual information. In particular, we introduce a first-of-its-kind dataset containing aligned raw RGB and polarimetric images, together with manually labeled ground truth. Polarization cameras provide a sensory augmentation that can significantly enhance image understanding, in particular for detecting highly reflective areas such as glass and water. Experimental results show that our proposed multimodal fusion network outperforms unimodal networks and two typical fusion architectures.
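To make the multi-level fusion idea concrete, the sketch below is a minimal, hypothetical PyTorch example of a two-branch encoder that fuses RGB and polarimetric feature maps by element-wise summation at each level, FuseNet-style. The layer widths, the summation-based fusion operator, and the 3-channel polarimetric input are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Conv-BN-ReLU block; the depths used below are illustrative only.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class MultiLevelFusionNet(nn.Module):
    """Hypothetical multi-level fusion: an RGB branch and a polarimetric
    branch whose feature maps are summed at every encoder stage; the fused
    features feed the next RGB stage, followed by a 1x1 classifier."""

    def __init__(self, num_classes=19, widths=(64, 128, 256)):
        super().__init__()
        self.rgb_stages = nn.ModuleList()
        self.pol_stages = nn.ModuleList()
        in_ch = 3  # assumption: 3-channel input for both modalities
        for w in widths:
            self.rgb_stages.append(conv_block(in_ch, w))
            self.pol_stages.append(conv_block(in_ch, w))
            in_ch = w
        self.pool = nn.MaxPool2d(2)
        self.classifier = nn.Conv2d(widths[-1], num_classes, kernel_size=1)

    def forward(self, rgb, pol):
        x_rgb, x_pol = rgb, pol
        for rgb_stage, pol_stage in zip(self.rgb_stages, self.pol_stages):
            x_rgb = rgb_stage(x_rgb)
            x_pol = pol_stage(x_pol)
            x_rgb = x_rgb + x_pol  # fuse the two modalities at this level
            x_rgb = self.pool(x_rgb)
            x_pol = self.pool(x_pol)
        logits = self.classifier(x_rgb)
        # Upsample coarse logits back to the input resolution.
        return F.interpolate(logits, size=rgb.shape[-2:],
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    net = MultiLevelFusionNet(num_classes=19)
    rgb = torch.randn(1, 3, 128, 128)
    pol = torch.randn(1, 3, 128, 128)
    print(net(rgb, pol).shape)  # torch.Size([1, 19, 128, 128])
```

Fusing at every encoder level, rather than only at the input (early fusion) or at the logits (late fusion), lets the polarimetric cues influence features at multiple scales, which is the behavior the abstract attributes to the proposed network.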