6 CONCLUSION
We have presented a pipeline for data-based simulation of LiDAR sensor behaviour that generalizes well to synthetic inputs in simulation environments. While the overall quality is not yet reliable enough to predict frame-accurate sensor artefacts, the LiDAR simulation is capable of reproducing them in general. We believe it is sufficient for many use cases that analyse the performance of ADAS functions over many virtual road miles. With the rapid development of LiDAR technology, better data sets will emerge naturally, while existing data sets can be enhanced with new data fusion techniques that exploit multiple consecutive frames.
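As a minimal sketch of the multi-frame fusion idea mentioned above, consecutive point clouds can be registered into a common coordinate frame via the ego poses and concatenated into a denser scan. The function below is an illustrative assumption, not part of our pipeline; the pose and point-cloud inputs are hypothetical:

```python
import numpy as np

def fuse_frames(point_clouds, poses):
    """Accumulate consecutive LiDAR frames in the coordinate
    frame of the last scan.

    point_clouds: list of (N_i, 3) arrays in sensor coordinates
    poses:        list of 4x4 homogeneous sensor-to-world matrices
    """
    world_to_last = np.linalg.inv(poses[-1])
    fused = []
    for pts, pose in zip(point_clouds, poses):
        # lift to homogeneous coordinates, then map each point
        # from its own sensor frame into the last sensor frame
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])
        fused.append((world_to_last @ pose @ homo.T).T[:, :3])
    return np.vstack(fused)
```

In practice, moving objects would smear under this rigid accumulation, so a real implementation would mask dynamic points or compensate their motion before concatenating.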
ACKNOWLEDGEMENTS
Richard Marcus was supported by the Bayerische
Forschungsstiftung (Bavarian Research Foundation)
AZ-1423-20.
A Lightweight Machine Learning Pipeline for LiDAR-simulation