Authors:
Ankit Sonthalia 1,2; Ramy Battrawy 1; René Schuster 1 and Didier Stricker 1

Affiliations:
1 German Research Center for Artificial Intelligence (DFKI GmbH), Kaiserslautern, Germany
2 Tübingen AI Center, Eberhard Karls Universität Tübingen, Germany
Keyword(s):
LiDAR, Point Cloud, Event Camera, Bi-Directional Fusion, Attention, Scene Flow.
Abstract:
In this paper, we propose the fusion of event streams and point clouds for scene flow estimation. Bio-inspired event cameras offer significantly lower latency and a higher dynamic range than regular RGB cameras, and are therefore well suited to recording high-speed motion. However, events do not provide depth information, which makes them unsuitable on their own for scene flow (3D) estimation. On the other hand, LiDAR-based approaches are well suited to scene flow estimation due to the high precision of LiDAR measurements in outdoor scenes (e.g., autonomous vehicle applications), but they fail in the presence of unstructured regions (e.g., ground surfaces, grass, or walls). We propose EvLiDAR-Flow, a neural network architecture equipped with an attention module for bi-directional feature fusion between an event (2D) branch and a point cloud (3D) branch. This fusion helps to overcome the lack of depth information in events while enabling the LiDAR-based scene flow branch to benefit from the rich motion information encoded by events. We validate the proposed EvLiDAR-Flow by showing that it performs significantly better than a state-of-the-art LiDAR-only scene flow estimation method and is robust to the presence of ground points.
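The abstract does not spell out the fusion mechanism, but the described idea of bi-directional attention between an event branch and a point cloud branch can be sketched in simplified form. The sketch below is an assumption-laden illustration, not the paper's implementation: it uses plain scaled dot-product cross-attention without learned query/key/value projections, and the function and variable names (`cross_attention`, `bidirectional_fusion`, `event_feat`, `point_feat`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    # Scaled dot-product attention: each query row attends to all
    # rows of keys_values and returns their weighted combination.
    d = queries.shape[1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (Nq, Nkv)
    return softmax(scores, axis=-1) @ keys_values   # (Nq, d)

def bidirectional_fusion(event_feat, point_feat):
    # Bi-directional fusion: event features attend to point features
    # and vice versa; each branch keeps a residual connection to its
    # own features so unimodal information is preserved.
    event_fused = event_feat + cross_attention(event_feat, point_feat)
    point_fused = point_feat + cross_attention(point_feat, event_feat)
    return event_fused, point_fused

# Toy example: 5 event-pixel features and 7 LiDAR-point features, dim 16.
rng = np.random.default_rng(0)
E = rng.standard_normal((5, 16))
P = rng.standard_normal((7, 16))
E_fused, P_fused = bidirectional_fusion(E, P)
```

Each branch retains its original shape after fusion, so the 2D and 3D branches can continue with their own decoders while carrying cross-modal motion and depth cues.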