The results indicate that while OFT provides a
framework for accurate line prediction when it detects
them, its overall line detection rate needs improvement.
BEV FastLine, on the other hand,
demonstrates a better balance between detecting
lines and maintaining a low false-positive rate.
These observations warrant further investigation
into the underlying factors contributing to the perfor-
mance disparities. It is conceivable that the intrinsic
characteristics of the LSS method, such as its focus
on learning from a comprehensive point cloud, may
afford it a broader detection capability. In contrast,
the OFT method’s dependence on transforming image
features to an orthographic view might limit its sensi-
tivity to certain types of line markings or variances in
environmental conditions.
7 CONCLUSIONS
The methodologies introduced in this study represent
a significant advancement
in line landmark detection for autonomous parking
systems. The empirical results underscore the
precision of both methods, with the BEV FastLine
approach demonstrating a commendable balance be-
tween precision and recall. This balance is crucial for
real-world applications where accurate line detection
is instrumental in safe and reliable vehicle navigation.
The OFT + SingleShot method also achieves superior
precision when compared to the baseline OFT-based
segmentation model. The current work lays the foundation
for future improvements in detection rates and
suggests that a hybrid approach may yield a better
solution. Such improvements are vital for navigating
complex environments and ensuring comprehensive
line detection coverage.
The Fast Splatting technique introduced in our
work requires 4× less computational time than the
standard cumsum operation and is also well suited to
the neural engines found on embedded systems.
Finally, the work delineated in this paper signif-
icantly enriches the evolving domain of autonomous
vehicle technologies. By highlighting the strengths
and areas for development in line landmark detection,
it steers future efforts towards creating more sophisti-
cated and robust systems. These systems will be es-
sential in realizing the full potential of autonomous
vehicles, ensuring safety, efficiency, and reliability in
automated parking and beyond.
ACKNOWLEDGEMENTS
The authors thank Valeo Vision System for their sup-
port, resources, and the opportunity to contribute to
the broader research community with this work.
VISAPP 2024 - 19th International Conference on Computer Vision Theory and Applications