
national conference on computer vision, pages 9297–9307.
Caccia, L., Van Hoof, H., Courville, A., and Pineau, J. (2019). Deep generative modeling of LiDAR data. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5034–5040. IEEE.
Caesar, H., Bankiti, V., Lang, A. H., Vora, S., Liong, V. E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., and Beijbom, O. (2020). nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11621–11631.
Dai, W., Chen, S., Huang, Z., Xu, Y., and Kong, D. (2022). LiDAR intensity completion: Fully exploiting the message from LiDAR sensors. Sensors, 22(19):7533.
Elmquist, A. and Negrut, D. (2020). Methods and models for simulating autonomous vehicle sensors. IEEE Transactions on Intelligent Vehicles, 5(4):684–692.
Geiger, A., Lenz, P., and Urtasun, R. (2012). Are we ready for autonomous driving? The KITTI vision benchmark suite. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 3354–3361. IEEE.
Guillard, B., Vemprala, S., Gupta, J. K., Miksik, O., Vineet, V., Fua, P., and Kapoor, A. (2022). Learning to simulate realistic LiDARs. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 8173–8180. IEEE.
Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A. (2017a). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1125–1134.
Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A. (2017b). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Li, B., Zhang, T., and Xia, T. (2016). Vehicle detection from 3D LiDAR using fully convolutional network. arXiv preprint arXiv:1608.07916.
Li, Y. and Ibanez-Guzman, J. (2020). LiDAR for autonomous driving: The principles, challenges, and trends for automotive LiDAR and perception systems. IEEE Signal Processing Magazine, 37(4):50–61.
Liang, Z., Zhang, M., Zhang, Z., Zhao, X., and Pu, S. (2020). RangeRCNN: Towards fast and accurate 3D object detection with range image representation. arXiv preprint arXiv:2009.00206.
Manivasagam, S., Wang, S., Wong, K., Zeng, W., Sazanovich, M., Tan, S., Yang, B., Ma, W.-C., and Urtasun, R. (2020). LiDARsim: Realistic LiDAR simulation by leveraging the real world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11167–11176.
Marcus, R., Knoop, N., Egger, B., and Stamminger, M. (2022). A lightweight machine learning pipeline for LiDAR-simulation. arXiv preprint arXiv:2208.03130.
Meyer, G. P., Laddha, A., Kee, E., Vallespi-Gonzalez, C., and Wellington, C. K. (2019). LaserNet: An efficient probabilistic 3D object detector for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12677–12686.
Mok, S.-C. and Kim, G.-W. (2021). Simulated intensity rendering of 3D LiDAR using generative adversarial network. In 2021 IEEE International Conference on Big Data and Smart Computing (BigComp), pages 295–297. IEEE.
Nakashima, K. and Kurazume, R. (2021). Learning to drop points for LiDAR scan synthesis. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 222–229. IEEE.
Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III, pages 234–241. Springer.
Royo, S. and Ballesta-Garcia, M. (2019). An overview of LiDAR imaging systems for autonomous vehicles. Applied Sciences, 9(19):4093.
Saleh, K., Hossny, M., Abobakr, A., Attia, M., and Iskander, J. (2023). VoxelScape: Large scale simulated 3D point cloud dataset of urban traffic environments. IEEE Transactions on Intelligent Transportation Systems, pages 1–14.
Schwarting, W., Alonso-Mora, J., and Rus, D. (2018). Planning and decision-making for autonomous vehicles. Annual Review of Control, Robotics, and Autonomous Systems, 1:187–210.
Sun, P., Kretzschmar, H., Dotiwalla, X., Chouard, A., Patnaik, V., Tsui, P., Guo, J., Zhou, Y., Chai, Y., Caine, B., et al. (2020). Scalability in perception for autonomous driving: Waymo Open Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2446–2454.
Tatoglu, A. and Pochiraju, K. (2012). Point cloud segmentation with LiDAR reflection intensity behavior. In 2012 IEEE International Conference on Robotics and Automation, pages 786–790. IEEE.
Vacek, P., Jašek, O., Zimmermann, K., and Svoboda, T. (2021). Learning to predict LiDAR intensities. IEEE Transactions on Intelligent Transportation Systems, 23(4):3556–3564.
Wang, W. and Shen, J. (2017). Deep visual attention prediction. IEEE Transactions on Image Processing, 27(5):2368–2378.
Wang, Y., Shi, T., Yun, P., Tai, L., and Liu, M. (2018). PointSeg: Real-time semantic segmentation based on 3D LiDAR point cloud. arXiv preprint arXiv:1807.06288.
Wang, Z., Fu, H., Wang, L., Xiao, L., and Dai, B. (2019). SCNet: Subdivision coding network for object detection based on 3D point cloud. IEEE Access, 7:120449–120462.
Yang, B., Liang, M., and Urtasun, R. (2018a). HDNet: Exploiting HD maps for 3D object detection. In Conference on Robot Learning, pages 146–155. PMLR.
Toward Physics-Aware Deep Learning Architectures for LiDAR Intensity Simulation