Authors:
Mohamed Sabry 1, Ahmed Hussein 2, Amr Elmougy 1 and Slim Abdennadher 1
Affiliations:
1 Computer Engineering Department, German University in Cairo (GUC), Cairo, Egypt
2 IAV GmbH, Intelligent Systems Functions Department, Berlin, Germany
Keyword(s):
Computer Vision, Image Processing, Radars, Point Clouds, Object Detection and Classification.
Abstract:
Perception and scene understanding are complex modules that require data from multiple sensor types to construct a weather-resilient system that can operate in almost all conditions. This is mainly due to the drawbacks of each sensor on its own. The only sensor able to work in a wide variety of conditions is the radar. However, the sparseness of radar point clouds in open-source datasets makes the radar under-perform in object classification tasks. This is in contrast to the LiDAR, which, after constraints and filtration, produces an average of 22,000 points per frame within a grid-map image representation covering 120 x 120 meters in the real world. Therefore, in this paper, a preprocessing module is proposed that enables the radar to partially reconnect objects in the scene from a sparse point cloud. This adapts the radar to object classification tasks rather than its conventional automotive uses, such as Adaptive Cruise Control or object tracking. The proposed module is used as a preprocessing step in a Deep Learning pipeline for a classification task. The evaluation was carried out on the nuScenes dataset, as it contains both radar and LiDAR data, which enables a comparison between the performance of both modalities. After applying the preprocessing module, this work manages to bring radar-based classification significantly close to the performance of the LiDAR.
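The grid-map image representation mentioned in the abstract can be illustrated with a minimal sketch (not the paper's implementation): projecting a 3D point cloud into a bird's-eye-view occupancy grid covering 120 x 120 meters around the ego vehicle. The cell resolution (0.25 m here, giving a 480 x 480 image) and the function name are assumptions for illustration only.

```python
import numpy as np

def points_to_bev_grid(points, extent=60.0, resolution=0.25):
    """Project an (N, 3) point cloud (x, y, z in meters, ego at origin)
    into a bird's-eye-view occupancy grid.

    extent: half-width of the window, so the grid spans 2*extent meters
            per side (120 x 120 m with the default).
    resolution: meters per cell (an assumed value, not from the paper).
    """
    size = int(2 * extent / resolution)          # cells per side (480)
    grid = np.zeros((size, size), dtype=np.uint8)
    # keep only points inside the 120 x 120 m window around the ego vehicle
    mask = (np.abs(points[:, 0]) < extent) & (np.abs(points[:, 1]) < extent)
    pts = points[mask]
    # convert metric coordinates to integer cell indices
    ix = ((pts[:, 0] + extent) / resolution).astype(int)
    iy = ((pts[:, 1] + extent) / resolution).astype(int)
    grid[iy, ix] = 255                           # mark occupied cells
    return grid

# toy frame of 22,000 random points, matching the average LiDAR count
# quoted in the abstract (synthetic data, for shape checking only)
rng = np.random.default_rng(0)
cloud = rng.uniform(-60, 60, size=(22000, 3))
bev = points_to_bev_grid(cloud)
print(bev.shape)  # (480, 480)
```

A sparse radar frame fed through the same projection would yield a mostly empty image, which is the gap the proposed preprocessing module addresses before classification.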