
tions, diffusion, non-uniform lighting, and feature fusion from multiple sensor modalities to optimize performance in real-world autonomous driving scenarios.
ACKNOWLEDGEMENTS
The authors would like to express their gratitude to the test engineers of CARISSMA, Christoph Trost and Michael Graf, for their support in enabling the successful execution of the tests, and Dr. Dagmar Steinhauser for reviewing the dataset. The authors thank the Bayerisches Verbundforschungsprogramm (BayVFP) of the Freistaat Bavaria for funding the research project BARCS (DIK0351) in the funding line Digitization.
REFERENCES
Behret, V., Kushtanova, R., Fadl, I., Weber, S., Helmer, T., and Palme, F. (2025). Sensor Calibration and Data Analysis of the MuFoRa Dataset. Accepted at VISAPP 2025.
Bijelic, M., Gruber, T., Mannan, F., Kraus, F., Ritter, W., Dietmayer, K., and Heide, F. (2020). Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11679–11689, Seattle, WA, USA. IEEE.
Burnett, K., Wu, Y., Yoon, D. J., Schoellig, A. P., and Barfoot, T. D. (2023). Are We Ready for Radar to Replace Lidar in All-Weather Mapping and Localization? arXiv:2203.10174 [cs].
Caesar, H., Bankiti, V., Lang, A. H., Vora, S., Liong, V. E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., and Beijbom, O. (2020). nuScenes: A multimodal dataset for autonomous driving. arXiv:1903.11027 [cs, stat].
Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. (2020). End-to-end object detection with transformers.
Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., and Schiele, B. (2016). The Cityscapes dataset for semantic urban scene understanding.
Deng, J., Shi, S., Li, P., Zhou, W., Zhang, Y., and Li, H. (2021). Voxel R-CNN: Towards High Performance Voxel-based 3D Object Detection. arXiv:2012.15712 [cs].
El-Shair, Z. A., Abu-raddaha, A., Cofield, A., Alawneh, H., Aladem, M., Hamzeh, Y., and Rawashdeh, S. A. (2024). SID: Stereo image dataset for autonomous driving in adverse conditions.
Geiger, A., Lenz, P., Stiller, C., and Urtasun, R. (2013). Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research, 32(11):1231–1237.
Graf, M., Vriesman, D., and Brandmeier, T. (2023). Testmethodik zur Untersuchung, Validierung und Absicherung von Störeinflüssen auf Umfeldsensoren durch Witterung unter reproduzierbaren Bedingungen [Test methodology for investigating, validating, and safeguarding against weather-induced interference on environment sensors under reproducible conditions]. VDI Verlag.
Gultepe, I., Tardif, R., Michaelides, S. C., Cermak, J., Bott, A., Bendix, J., Müller, M. D., Pagowski, M., Hansen, B., Ellrod, G., Jacobs, W., Toth, G., and Cober, S. G. (2007). Fog Research: A Review of Past Achievements and Future Perspectives. Pure and Applied Geophysics, 164(6-7):1121–1159.
Heinzler, R., Schindler, P., Seekircher, J., Ritter, W., and Stork, W. (2019). Weather Influence and Classification with Automotive Lidar Sensors.
Hu, X., Fu, C.-W., Zhu, L., and Heng, P.-A. (2019). Depth-attentional features for single-image rain removal.
Jocher, G., Qiu, J., and Chaurasia, A. (2023). Ultralytics YOLO.
Jokela, M., Kutila, M., and Pyykönen, P. (2019). Testing and Validation of Automotive Point-Cloud Sensors in Adverse Weather Conditions. Applied Sciences, 9:2341.
Kenk, M. A. and Hassaballah, M. (2020). DAWN: Vehicle Detection in Adverse Weather Nature Dataset. arXiv:2008.05402 [cs].
Lakra, K. and Avishek, K. (2022). A review on factors influencing fog formation, classification, forecasting, detection and impacts. Rendiconti Lincei. Scienze Fisiche e Naturali, 33(2):319–353.
Liao, Y., Xie, J., and Geiger, A. (2022). KITTI-360: A Novel Dataset and Benchmarks for Urban Scene Understanding in 2D and 3D. arXiv:2109.13410 [cs].
Mao, J., Niu, M., Jiang, C., Liang, H., Chen, J., Liang, X., Li, Y., Ye, C., Zhang, W., Li, Z., Yu, J., Xu, H., and Xu, C. (2021). One Million Scenes for Autonomous Driving: ONCE Dataset. arXiv:2106.11037 [cs].
Marathe, A., Ramanan, D., Walambe, R., and Kotecha, K. (2023). WEDGE: A multi-weather autonomous driving dataset built from generative vision-language models. arXiv:2305.07528 [cs].
Miclea, R.-C., Dughir, C., Alexa, F., Sandru, F., and Silea, I. (2020). Laser and LIDAR in a System for Visibility Distance Estimation in Fog Conditions. Sensors, 20(21):6322.
Ros, G., Sellart, L., Materzynska, J., Vazquez, D., and Lopez, A. M. (2016). The SYNTHIA dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3234–3243. IEEE.
Sakaridis, C., Dai, D., and Van Gool, L. (2021). ACDC: The adverse conditions dataset with correspondences for semantic driving scene understanding. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 10745–10755. IEEE.
Sezgin, F., Vriesman, D., Steinhauser, D., Lugner, R., and Brandmeier, T. (2023). Safe Autonomous Driving in Adverse Weather: Sensor Evaluation and Performance Monitoring. In 2023 IEEE Intelligent Vehicles Symposium (IV). IEEE.