
ACKNOWLEDGEMENT
The authors thank the Bayerisches Verbundforschungsprogramm (BayVFP) of the Free State of Bavaria for funding the research project BARCS (DIK0351) in the funding line Digitization, and the research center CARISSMA.