Table 7: Mean and standard deviation values for testing the RL agent with different lookahead offsets.

Lookahead   20      40      60      80      100
Mean        0.0164  0.0188  0.0195  0.0391  0.0671
SD          0.0090  0.0110  0.0141  0.0190  0.0398
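The lookahead offset determines how far ahead along the reference path the target point is placed. A minimal sketch of such a target-point update (all names are illustrative assumptions, not the authors' implementation) could look like:

```python
import numpy as np

def select_target(path, position, lookahead):
    """Pick the target point `lookahead` waypoints ahead of the waypoint
    nearest to the current vehicle position.

    `path` is an (N, 2) array of waypoints on a closed track, `position`
    is the vehicle's (x, y), and `lookahead` is an index offset.
    All names are hypothetical, not taken from the paper.
    """
    dists = np.linalg.norm(path - position, axis=1)  # distance to each waypoint
    nearest = int(np.argmin(dists))                  # closest waypoint index
    return path[(nearest + lookahead) % len(path)]   # wrap around the closed loop

# Usage: a unit-circle track sampled at 100 waypoints
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
track = np.stack([np.cos(theta), np.sin(theta)], axis=1)
target = select_target(track, np.array([1.0, 0.0]), lookahead=20)
```

Re-evaluating this selection at every step moves the target forward as the vehicle advances, which matches the continuous target update described below.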
as an end-to-end learning approach to take over the comprehensive steering control for an autonomous vehicle. The proposed agent was trained and tested in the AutoMiny-Gazebo environment, which implements a realistic model of the AutoMiny model car. The target point was updated continuously in each iteration during training, with the aim of encouraging the agent to follow a pre-defined path with minimum cross-track error. A continuous state space comprising the agent's position, orientation, and speed, together with the target point coordinates and the desired orientation, was employed. Cross-track error and orientation error were combined in the reward function.
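Such a combined reward could be sketched, for example, as a weighted negative sum of the two error terms (the weights and function names are assumptions for illustration, not the paper's exact formulation):

```python
import math

def reward(cross_track_err, heading, desired_heading, w_ct=1.0, w_or=0.5):
    """Combine cross-track error and orientation error into a scalar reward.

    The weights w_ct and w_or are illustrative, not from the paper.
    """
    # Wrap the heading difference into [-pi, pi] so that, e.g.,
    # headings of 0.1 and 2*pi - 0.1 give a small error, not a large one.
    orient_err = math.atan2(math.sin(desired_heading - heading),
                            math.cos(desired_heading - heading))
    # Penalize both deviations; perfect tracking yields the maximum reward 0.
    return -(w_ct * abs(cross_track_err) + w_or * abs(orient_err))

# Usage: on the path and correctly oriented -> reward 0
r = reward(0.0, 0.3, 0.3)
```

Making the reward a smooth, strictly decreasing function of both errors gives the agent a gradient toward the path at every step, rather than only a sparse success signal.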
Updating the target points during training gave the agent broader experience, and it was able to drive a complete loop around the track after 80 training loops. Although the achieved results are very encouraging, more testing is necessary to validate the agent's ability to follow different paths with more diverse route profiles. Moreover, future work should extend the approach to compute the optimal vehicle velocity depending on the path and to deploy the agent on a real platform.
ACKNOWLEDGEMENTS
This material is based upon work supported by the Bundesministerium für Verkehr und digitale Infrastruktur (BMVI) in Germany as part of the Shuttles&Co project, within the Automatisiertes, Vernetztes Fahren (AVF) program.
Path Following with Deep Reinforcement Learning for Autonomous Cars