Authors:
David Olivares 1, 2; Pierre Fournier 2; Pavan Vasishta 1 and Julien Marzat 2
Affiliations:
1 AKKODIS Research, 78280 Guyancourt, France
2 DTIS, ONERA, Université Paris-Saclay, 91123 Palaiseau, France
Keyword(s):
Reinforcement Learning, Unmanned Aerial Vehicle, Fixed-Wing Unmanned Aerial Vehicle, Attitude Control, Wind Disturbances.
Abstract:
This paper evaluates and compares the performance of model-free and model-based reinforcement learning for the attitude control of fixed-wing unmanned aerial vehicles, using a PID controller as a reference point. The comparison focuses on their ability to handle varying flight dynamics and wind disturbances in a simulated environment. Our results show that the Temporal Difference Model Predictive Control agent outperforms both the PID controller and model-free reinforcement learning methods in tracking accuracy and robustness across different reference difficulties, particularly in nonlinear flight regimes. Furthermore, we introduce actuation fluctuation as a key metric to assess energy efficiency and actuator wear, and we test two approaches from the literature: action variation penalty and conditioning for action policy smoothness. We also evaluate all control methods when subjected to stochastic turbulence and gusts separately, so as to measure their effects on tracking performance, observe their limitations and outline their implications for the Markov decision process formalism.
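To make the actuation-fluctuation metric and the action variation penalty mentioned above concrete, a minimal Python sketch follows. The abstract does not give the exact formulas or weights used in the paper, so the definitions, function names, and the penalty weight lam below are illustrative assumptions based on common formulations.

```python
import numpy as np

def actuation_fluctuation(actions: np.ndarray) -> float:
    """Mean absolute change in actuator commands between consecutive steps.

    actions: array of shape (T, n_actuators) collected over an episode.
    Lower values indicate smoother control and less actuator wear.
    (Assumed formulation; the paper may use a different definition.)
    """
    return float(np.mean(np.abs(np.diff(actions, axis=0))))

def reward_with_action_variation_penalty(tracking_error: np.ndarray,
                                         action: np.ndarray,
                                         prev_action: np.ndarray,
                                         lam: float = 0.1) -> float:
    """Tracking reward minus an assumed penalty on the change in action."""
    tracking_reward = -float(np.sum(np.abs(tracking_error)))
    smoothness_penalty = lam * float(np.sum(np.abs(action - prev_action)))
    return tracking_reward - smoothness_penalty
```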