5 CONCLUSION
This paper presented a lane detection technique based on deep learning models that exploits temporal information. We improved the convolutional operations of the 3D ResNet50 network architecture. The main contribution of this work is twofold: first, incorporating the time axis into PINet and PolyLaneNet, and second, improving the 3D ResNet50 network model. The experiments show that detection accuracy is greatly improved across a variety of complex driving scenes.
ACKNOWLEDGMENTS
This work was partially supported by the Advanced Institute of Manufacturing with High-tech Innovations (AIM-HI) from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan, by the Ministry of Science and Technology of Taiwan under Grant MOST 106-2221-E-194-004, and by Create Electronic Optical Co., LTD, Taiwan.
REFERENCES
Aly, M. (2008). Real time detection of lane markers in urban streets. In 2008 IEEE Intelligent Vehicles Symposium, pages 7–12. IEEE.

Badrinarayanan, V., Kendall, A., and Cipolla, R. (2017). SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12):2481–2495.

Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A. L. (2018). DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848.

Chen, Z., Liu, Q., and Lian, C. (2019). PointLaneNet: Efficient end-to-end CNNs for accurate real-time lane detection. In 2019 IEEE Intelligent Vehicles Symposium (IV), pages 2563–2568. IEEE.

Ghafoorian, M., Nugteren, C., Baka, N., Booij, O., and Hofmann, M. (2018). EL-GAN: Embedding loss driven generative adversarial networks for lane detection. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, pages 0–0.

He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.

Hou, Y., Ma, Z., Liu, C., and Loy, C. C. (2019). Learning lightweight lane detection CNNs by self attention distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1013–1021.

Kai, Z. (2017). TuSimple datasets. https://github.com/TuSimple/tusimple-benchmark.

Ko, Y., Jun, J., Ko, D., and Jeon, M. (2020). Key points estimation and point instance segmentation approach for lane detection. arXiv preprint arXiv:2002.06604.

Li, X., Li, J., Hu, X., and Yang, J. (2019). Line-CNN: End-to-end traffic line detection with line proposal unit. IEEE Transactions on Intelligent Transportation Systems, 21(1):248–258.

Lin, H. Y., Dai, J. M., Wu, L. T., and Chen, L. Q. (2020). A vision based driver assistance system with forward collision and overtaking detection. Sensors, 20(18):100–109.

Lo, S.-Y., Hang, H.-M., Chan, S.-W., and Lin, J.-J. (2019). Multi-class lane semantic segmentation using efficient convolutional networks. In 2019 IEEE 21st International Workshop on Multimedia Signal Processing (MMSP), pages 1–6. IEEE.

Neven, D., De Brabandere, B., Georgoulis, S., Proesmans, M., and Van Gool, L. (2018). Towards end-to-end lane detection: An instance segmentation approach. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 286–291. IEEE.

Pan, X., Shi, J., Luo, P., Wang, X., and Tang, X. (2018). Spatial as deep: Spatial CNN for traffic scene understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.

Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., and Woo, W.-c. (2015). Convolutional LSTM network: A machine learning approach for precipitation nowcasting. arXiv preprint arXiv:1506.04214.

Tabelini, L., Berriel, R., Paixao, T. M., Badue, C., De Souza, A. F., and Oliveira-Santos, T. (2021). PolyLaneNet: Lane estimation via deep polynomial regression. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 6150–6156. IEEE.

Tran, D., Wang, H., Torresani, L., Ray, J., LeCun, Y., and Paluri, M. (2018). A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6450–6459.

Xu, B., Wang, N., Chen, T., and Li, M. (2015). Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853.

Yuan, W., Yang, M., Li, H., Wang, C., and Wang, B. (2018). End-to-end learning for high-precision lane keeping via multi-state model. CAAI Transactions on Intelligence Technology, 3(4):185–190.

Zou, Q., Jiang, H., Dai, Q., Yue, Y., Chen, L., and Wang, Q. (2019). Robust lane detection from continuous driving scenes using deep neural networks. IEEE Transactions on Vehicular Technology, 69(1):41–54.
A Vision-based Lane Detection Technique using Deep Neural Networks and Temporal Information