les can get LiDAR-like performance, if not better, using serial-production automotive radars and cameras.
6 CONCLUSION
In this paper, a preprocessing module has been proposed to enhance the object classification performance on radar data. The proposed approach was tested on the FPN-ResNet-based network architecture from (Dung, 2020). The results showed that the proposed module indeed provides a substantial performance gain over radar data processed without the module, and significantly narrows the gap to the LiDAR performance on the ground-truth data.
As for future work, the proposed algorithm is to be extended to run on all eight classes of the nuScenes evaluation server leaderboard. Furthermore, the radar data is to be fused with camera data in order to improve the object classification accuracy beyond that of LiDAR on its own.
REFERENCES
Caesar, H., Bankiti, V., Lang, A. H., Vora, S., Liong, V. E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., and Beijbom, O. (2019). nuScenes: A multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027.
Comer, M. L. and Delp III, E. J. (1999). Morphological operations for color image processing. Journal of Electronic Imaging, 8(3):279–289.
Danzer, A., Griebel, T., Bach, M., and Dietmayer, K. (2019). 2D car detection in radar data with PointNets. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pages 61–66. IEEE.
Dung, N. M. (2020). Super-Fast-Accurate-3D-Object-Detection-PyTorch. https://github.com/maudzung/Super-Fast-Accurate-3D-Object-Detection.
Geiger, A., Lenz, P., and Urtasun, R. (2012). Are we ready for autonomous driving? The KITTI vision benchmark suite. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 3354–3361. IEEE.
Kim, W., Cho, H., Kim, J., Kim, B., and Lee, S. (2020). YOLO-based simultaneous target detection and classification in automotive FMCW radar systems. Sensors, 20(10):2897.
Knott, E. F., Schaeffer, J. F., and Tulley, M. T. (2004). Radar Cross Section. SciTech Publishing.
Li, P., Zhao, H., Liu, P., and Cao, F. (2020). RTM3D: Real-time monocular 3D detection from object keypoints for autonomous driving. arXiv preprint arXiv:2001.03343.
Mohammed, A. S., Amamou, A., Ayevide, F. K., Kelouwani, S., Agbossou, K., and Zioui, N. (2020). The perception system of intelligent ground vehicles in all weather conditions: A systematic literature review. Sensors, 20(22):6532.
Nabati, R. and Qi, H. (2020). CenterFusion: Center-based radar and camera fusion for 3D object detection. arXiv preprint arXiv:2011.04841.
Palffy, A., Dong, J., Kooij, J. F., and Gavrila, D. M. (2020). CNN based road user detection using the 3D radar cube. IEEE Robotics and Automation Letters, 5(2):1263–1270.
Redmon, J. and Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.
Wang, L., Chen, T., Anklam, C., and Goldluecke, B. (2020). High dimensional frustum PointNet for 3D object detection from camera, LiDAR, and radar. In 2020 IEEE Intelligent Vehicles Symposium (IV), pages 1621–1628. IEEE.