5.1 Further Research
For further research, several aspects of the virtual sensor can be improved. The particle-based LiDAR approach can be improved further by using hardware ray tracing. Additionally, instead of considering only the geometry of the scene, we plan to develop a weighting of the sensor signals based on the surface material, to obtain a more accurate physical model. This weighting is particularly well suited to the depth-camera-based approach explained in subsection 4.1.2, since material properties can be accessed in the same way as the scene depth in Unreal Engine. Another important aspect is reflection, which depends on the material as well as on the surface normal. Here the approaches differ by sensor type: with ray tracing, reflections are easy to calculate, whereas for the depth-based approach only single reflections can be computed easily, since Unreal Engine provides a normal map of the scene analogous to the scene depth and the material properties. For all sensors, more work has to be invested in the noise model to reflect the physical properties of a LiDAR sensor.
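The three directions above (material-based weighting, a single specular bounce from the normal map, and a sensor noise model) could be combined per ray roughly as follows. This is only an illustrative sketch: the function name, the Lambertian-style weighting, the Gaussian range noise, and the intensity threshold are our own assumptions, not the implementation described in this paper.

```python
import numpy as np

def lidar_return(depth, normal, ray_dir, reflectivity,
                 noise_sigma=0.01, min_intensity=0.05):
    """Sketch of a material-aware LiDAR return for one ray.

    depth        -- scene depth along the ray (metres)
    normal       -- unit surface normal at the hit point
    ray_dir      -- unit direction of the emitted ray
    reflectivity -- scalar material reflectivity in [0, 1], assumed to be
                    sampled from the engine like the scene depth
    """
    # Lambertian-style weighting: the return is strongest when the ray
    # hits the surface head-on (ray_dir anti-parallel to the normal).
    cos_incidence = max(0.0, -float(np.dot(ray_dir, normal)))

    # Inverse-square distance falloff of the returned signal.
    intensity = reflectivity * cos_incidence / max(depth, 1e-6) ** 2

    # Single specular bounce derived from the normal map:
    #   r = d - 2 (d . n) n
    reflected_dir = ray_dir - 2.0 * float(np.dot(ray_dir, normal)) * normal

    # Simple Gaussian range noise as a placeholder for a physically
    # motivated noise model of the sensor.
    noisy_depth = depth + np.random.normal(0.0, noise_sigma)

    # Weak returns (absorbing material, grazing angle, long range) are
    # dropped, as a real sensor would miss them.
    if intensity < min_intensity:
        return None
    return noisy_depth, intensity, reflected_dir
```

The returned reflection direction could then be used to spawn a secondary ray; for the depth-based approach only this single bounce is available, matching the limitation discussed above.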
SIMULTECH 2023 - 13th International Conference on Simulation and Modeling Methodologies, Technologies and Applications