Authors:
Guus Engels ¹; Nerea Aranjuelo ¹; Ignacio Arganda-Carreras ²; Marcos Nieto ¹ and Oihana Otaegui ¹
Affiliations:
¹ Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Mikeletegi 57, San Sebastian, Spain
² Basque Country University (UPV/EHU), San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain; Donostia International Physics Center (DIPC), San Sebastian, Spain
Keyword(s):
LiDAR, 3D Object Detection, Feature Extraction, Point Cloud.
Abstract:
This paper presents a new approach to 3D object detection that leverages the properties of the data obtained by a LiDAR sensor. State-of-the-art detectors use neural network architectures based on assumptions that are valid for camera images, but point clouds obtained from LiDAR data are fundamentally different. Most detectors extract features with shared filter kernels, which do not account for the range-dependent nature of point cloud features. To show this, different detectors are trained on two splits of the KITTI dataset: close range (points up to 25 meters from the LiDAR) and long range. Top-view images are generated from the point clouds as input for the networks. The combined results outperform the baseline network trained on the full dataset with a single backbone. Additional experiments compare the effect of using different input features when converting the point cloud to an image. The results indicate that the network focuses on the shape and structure of the objects rather than the exact values of the input. This work proposes an improvement for 3D object detectors that takes into account the properties of LiDAR point clouds over distance. Results show that training separate networks for close-range and long-range objects boosts performance for all KITTI benchmark difficulties.
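The range-based split and top-view conversion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 25 m threshold comes from the abstract, while the grid extent, resolution, and the choice of a max-height channel are assumptions made for the example (the paper compares several input features).

```python
import numpy as np

def split_by_range(points: np.ndarray, threshold_m: float = 25.0):
    """Split a LiDAR cloud (N x 4: x, y, z, intensity) into close-range
    and long-range subsets by planar distance to the sensor (25 m as in
    the paper's KITTI split)."""
    dist = np.hypot(points[:, 0], points[:, 1])
    return points[dist <= threshold_m], points[dist > threshold_m]

def to_top_view(points: np.ndarray, x_max: float = 50.0,
                y_half: float = 25.0, res: float = 0.1) -> np.ndarray:
    """Rasterize points into a top-view (bird's-eye) image; here each
    pixel stores the maximum z of the points falling into it.
    Extent and resolution values are illustrative assumptions."""
    h, w = int(x_max / res), int(2 * y_half / res)
    # assumed ground floor of -3 m so empty cells stay below any point
    img = np.full((h, w), -3.0, dtype=np.float32)
    # keep only points inside the cropped area in front of the sensor
    m = (points[:, 0] >= 0) & (points[:, 0] < x_max) & \
        (np.abs(points[:, 1]) < y_half)
    p = points[m]
    rows = (p[:, 0] / res).astype(int)
    cols = ((p[:, 1] + y_half) / res).astype(int)
    np.maximum.at(img, (rows, cols), p[:, 2].astype(np.float32))
    return img
```

With this split, one detector backbone would be trained on the close-range images and another on the long-range ones, and their detections combined at evaluation time, mirroring the two-network setup the abstract reports.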