Table 3: Lane Keep Time. Times are reported in seconds, rounded to one decimal place.

Test   Model 3   NVIDIA Model
1      582.3     450.6
2      624.5     472.4
3      541.7     442.3
4      693.1     443.7
5      599.2     458.3
6      612.4     451.4
7      643.8     454.2
8      605.9     451.8
9      564.1     484.9
10     642.4     438.4
11     656.4     460.4
12     604.5     464.1
13     587.1     434.8
14     627.2     469.9
15     623.2     457.5
16     674.4     428.7
17     675.7     449.4
18     611.8     456.3
19     604.2     461.6
20     572.8     443.4
Average  617.3   453.7
Table 4: Paired t-test for Model 3 and NVIDIA Model.

Variable       Mean    t-statistic   p-value
Model 3        617.3   16.63         8.84e-11 (< 0.0001)
NVIDIA Model   453.7
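The statistics in Table 4 can be reproduced directly from the raw times in Table 3. A minimal sketch using only the Python standard library is shown below; it computes the per-test differences and the paired t-statistic by hand. Obtaining the p-value requires a t-distribution CDF (e.g. scipy.stats), so only the reported value is noted in a comment.

```python
import math
from statistics import mean, stdev

# Lane-keep times in seconds, transcribed from Table 3 (tests 1-20).
model3 = [582.3, 624.5, 541.7, 693.1, 599.2, 612.4, 643.8, 605.9, 564.1, 642.4,
          656.4, 604.5, 587.1, 627.2, 623.2, 674.4, 675.7, 611.8, 604.2, 572.8]
nvidia = [450.6, 472.4, 442.3, 443.7, 458.3, 451.4, 454.2, 451.8, 484.9, 438.4,
          460.4, 464.1, 434.8, 469.9, 457.5, 428.7, 449.4, 456.3, 461.6, 443.4]

# A paired t-test is a one-sample t-test on the per-test differences.
diffs = [a - b for a, b in zip(model3, nvidia)]
n = len(diffs)                                        # 20 paired runs
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # n - 1 = 19 degrees of freedom

# Means match the Table 3 averages (~617.3 s and ~453.7 s).
print(f"t = {t_stat:.2f}")  # t = 16.63, matching Table 4
# The paper reports p = 8.84e-11 for this t at 19 d.o.f. (computed with a
# t-distribution CDF, not reproduced here).
```

The very small p-value reflects that Model 3 beats the NVIDIA model on every one of the 20 runs, by a margin that is large relative to its run-to-run variation.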
End-to-end Learning Approach for Autonomous Driving: A Convolutional Neural Network Model