and RL steering methods for path tracking. The goal was to determine whether RL speed control could improve the balance between tracking accuracy (safety) and speed (time efficiency) compared to simpler model-based speed controllers. We also evaluated whether, in our scenarios, there was any benefit to performing speed control simultaneously with steering control rather than sequentially. The results revealed several key insights, depending on the steering control used:
• Model-Based Steering: A and VC significantly reduced lateral errors when combined with Pure Pursuit (PP) and EBSF, but at the cost of reduced speed. The predictability of the model-based steering was crucial for the RL speed agents to perform effectively.
• RL Steering: Applied sequentially with RL steering, the RL speed methods (A, AC, VC) underperformed compared to A_ref. The predictability of A_ref aided RL steering, but this advantage was lost with RL speed controls. Similarly, learning simultaneous controls proved challenging, indicating the need for further refinement to achieve effective joint control.
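For reference, the Pure Pursuit (PP) baseline mentioned above (Coulter, 1992) reduces to a simple geometric steering law. The sketch below assumes a kinematic bicycle model with the look-ahead point already selected on the reference path; the function name and signature are illustrative, not taken from the paper.

```python
import math

def pure_pursuit_steering(pose, goal, wheelbase):
    """Pure Pursuit steering law (Coulter, 1992) for a kinematic bicycle model.

    pose: (x, y, yaw) of the vehicle's rear axle, yaw in radians.
    goal: (gx, gy) look-ahead point on the reference path.
    wheelbase: distance between front and rear axles [m].
    Returns the front-wheel steering angle [rad].
    """
    x, y, yaw = pose
    gx, gy = goal
    # Angle between the vehicle heading and the line to the look-ahead point.
    alpha = math.atan2(gy - y, gx - x) - yaw
    # Look-ahead distance from the rear axle to the target point.
    ld = math.hypot(gx - x, gy - y)
    # Curvature of the arc through the goal, converted to a steering angle.
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)
```

A goal point straight ahead yields a zero steering angle, while a goal to the left of the heading yields a positive (left-turn) angle; the speed controllers compared in this work modulate the velocity along the path that this law tracks.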
In summary, while RL speed controllers can enhance safety and reduce lateral errors when combined with model-based steering, their benefits may be limited in simple tracking scenarios. However, using RL to learn acceleration control remains of interest, especially when an additional safety layer is required on top of poorly performing steering controllers. In future work, we will focus on real-world testing and explore fine-tuning the agent to address sim-to-real issues.
REFERENCES
Attia, R., Orjuela, R., and Basset, M. (2014). Combined
longitudinal and lateral control for automated vehicle
guidance. Vehicle System Dynamics.
Cai, P., Mei, X., Tai, L., Sun, Y., and Liu, M. (2020). High-
speed autonomous drifting with deep reinforcement
learning. IEEE-RAL.
Chemin, J., Hill, A., Lucet, E., and Mayoue, A. (2024).
A study of reinforcement learning techniques for path
tracking in autonomous vehicles. In IEEE-IV.
Cheng, Z., Chow, M.-Y., Jung, D., and Jeon, J. (2017).
A big data based deep learning approach for vehicle
speed prediction. In IEEE-ISIE.
Coulter, C. (1992). Implementation of the pure pursuit path tracking algorithm. Technical Report CMU-RI-TR-92-01, Carnegie Mellon University.
Devineau, G., Polack, P., Altché, F., and Moutarde, F. (2018). Coupled longitudinal and lateral control of a vehicle using deep learning. CoRR.
Faust, A., Ramirez, O., Fiser, M., Oslund, K., Francis,
A. G., Davidson, J., and Tapia, L. (2017). PRM-
RL: long-range robotic navigation tasks by combining
reinforcement learning and sampling-based planning.
CoRR.
Gangopadhyay, B., Dasgupta, P., and Dey, S. (2022). Safe and stable RL (S2RL) driving policies using control barrier and control Lyapunov functions. IEEE-IV.
Gauthier-Clerc, F., Hill, A., Laneurit, J., Lenain, R., and
Lucet, E. (2021). Online velocity fluctuation of off-
road wheeled mobile robots: A reinforcement learn-
ing approach. IEEE-ICRA.
Geng, X., Liang, H., Xu, H., Yu, B., and Zhu, M. (2016).
Human-driver speed profile modeling for autonomous
vehicle’s velocity strategy on curvy paths. In IEEE-IV.
Hill, A. (2022). Adaptation du comportement sensorimoteur de robots mobiles en milieux complexes. PhD thesis, Université Clermont-Ferrand.
Hoffmann, G. M., Tomlin, C. J., Montemerlo, M., and
Thrun, S. (2007). Autonomous automobile trajectory
tracking for off-road driving: Controller design, ex-
perimental validation and racing. In American Control
Conference.
Kendall, A., Hawke, J., Janz, D., Mazur, P., Reda, D., Allen,
J. M., Lam, V. D., Bewley, A., and Shah, A. (2019).
Learning to drive in a day. In IEEE-ICRA.
Lenain, R., Thuilot, B., Cariou, C., and Martinet, P. (2021).
Accurate autonomous navigation strategy dedicated to
the storage of buses in a bus center. Robotics and Au-
tonomous Systems.
Macadam, C. (2003). Understanding and modeling the hu-
man driver. Vehicle system dynamics.
Normey-Rico, J. E., Alcalá, I., Gómez-Ortega, J., and Camacho, E. F. (2001). Mobile robot path tracking using a robust PID controller. Control Engineering Practice.
Paden, B., Čáp, M., Yong, S. Z., Yershov, D., and Frazzoli, E. (2016). A survey of motion planning and control techniques for self-driving urban vehicles. IEEE-IV.
Raffin, A., Hill, A., Gleave, A., Kanervisto, A., Ernestus,
M., and Dormann, N. (2021). Stable-baselines3: Reli-
able reinforcement learning implementations. Journal
of Machine Learning Research.
Serna, C. G. and Ruichek, Y. (2017). Dynamic speed adap-
tation for path tracking based on curvature informa-
tion and speed limits. Sensors, 17(6):1383.
Stano, P., Montanaro, U., Tavernini, D., Tufo, M., Fiengo,
G., Novella, L., and Sorniotti, A. (2022). Model pre-
dictive path tracking control for automated road vehi-
cles: A review. Annual Reviews in Control.
Vollenweider, E., Bjelonic, M., Klemm, V., Rudin, N., Lee,
J., and Hutter, M. (2023). Advanced skills through
multiple adversarial motion priors in reinforcement
learning. In IEEE-ICRA.
Weber, T. and Gerdes, J. C. (2023). Modeling and control
for dynamic drifting trajectories. IEEE-IV.
Xu, Z., Liu, B., Xiao, X., Nair, A., and Stone, P. (2023).
Benchmarking reinforcement learning techniques for
autonomous navigation. In IEEE-ICRA.
Zhou, H., Gao, J., and Liu, H. (2021). Vehicle speed
preview control with road curvature information for
safety and comfort promotion. Proceedings of the In-
stitution of Mechanical Engineers.
Does Path Tracking Benefit from Sequential or Simultaneous RL Speed Controls?