Model deployment: The trained model is then
deployed on the vehicle's onboard computer, where
it must run in real time to predict control actions
from the current sensor inputs.
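As an illustration of this step, the sketch below shows a minimal fixed-rate inference loop, assuming the trained allocator has been exported to ONNX and is run with onnxruntime; the model file name, the 100 Hz cycle time, and the read_sensors/apply_commands interfaces are hypothetical placeholders, not artifacts of this work.

```python
# Minimal sketch of real-time onboard inference, assuming the trained
# allocator was exported to ONNX. read_sensors() and apply_commands()
# are hypothetical placeholders for the vehicle's I/O interfaces.
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("ca_policy.onnx")  # hypothetical model file
input_name = session.get_inputs()[0].name

PERIOD_S = 0.01  # assumed 100 Hz control cycle

def read_sensors() -> np.ndarray:
    """Placeholder: return the current sensor/state vector."""
    raise NotImplementedError

def apply_commands(u: np.ndarray) -> None:
    """Placeholder: forward actuator commands to the chassis systems."""
    raise NotImplementedError

while True:
    t0 = time.monotonic()
    x = read_sensors().astype(np.float32)[None, :]  # batch of one
    u = session.run(None, {input_name: x})[0][0]    # predicted control actions
    apply_commands(u)
    # Sleep for the remainder of the cycle to hold a fixed rate.
    time.sleep(max(0.0, PERIOD_S - (time.monotonic() - t0)))
```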
For transfer learning (TL), the idea of combining
BC and RL seems relevant. For example, BC can be
used to clone the existing CA, and DRL can then
generalize the cloned behavior through TL,
introducing adaptability beyond the original
allocator. However, the limitations of BC should
first be studied to justify the use of TL, since the
automotive field is highly restrictive in terms of
computational power.
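To make this two-stage idea concrete, the minimal PyTorch sketch below first clones the existing CA from logged (state, action) pairs (BC), then transfers the cloned weights into a simple policy-gradient fine-tuning loop; plain REINFORCE with fixed Gaussian exploration stands in here for a full DRL algorithm such as PPO or SAC. The gymnasium-style environment and all hyperparameters are assumptions for illustration, not the method of this paper.

```python
# Sketch of BC -> DRL transfer learning, not the paper's implementation.
# Assumes (state, action) pairs logged from the existing optimization-based
# CA and a gymnasium-style vehicle environment (both hypothetical).
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Small MLP mapping the vehicle state to actuator commands."""
    def __init__(self, n_states: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def behavior_cloning(policy, states, actions, epochs=200, lr=1e-3):
    """Stage 1 (BC): regress the policy onto the existing CA's outputs."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(policy(states), actions)
        loss.backward()
        opt.step()
    return policy

def drl_finetune(policy, env, episodes=50, lr=1e-4, sigma=0.1, gamma=0.99):
    """Stage 2 (TL + DRL): fine-tune the cloned policy with REINFORCE,
    using fixed Gaussian exploration noise around the policy's output."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(episodes):
        obs, _ = env.reset()
        log_probs, rewards, done = [], [], False
        while not done:
            mean = policy(torch.as_tensor(obs, dtype=torch.float32))
            dist = torch.distributions.Normal(mean, sigma)
            action = dist.sample()
            log_probs.append(dist.log_prob(action).sum())
            obs, reward, terminated, truncated, _ = env.step(action.numpy())
            rewards.append(reward)
            done = terminated or truncated
        # Discounted, normalized returns, then one policy-gradient step.
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + gamma * g
            returns.insert(0, g)
        returns = torch.as_tensor(returns, dtype=torch.float32)
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)
        loss = -(torch.stack(log_probs) * returns).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy
```

The transfer step is simply that drl_finetune starts from the BC weights rather than a random initialization; whether this warm start pays off under automotive computational constraints is precisely the open question raised above.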
6 CONCLUSIONS
In this paper, three learning approaches to
implementing control allocation have been
introduced. Control orchestration needs for chassis
systems have been presented, and the limitations of
optimization-based coordination have been
discussed. The main results reported in the state of
the art provide strong motivation that such
approaches can satisfy our needs in terms of
generalization, prediction, and fidelity.
Future work will provide an in-depth discussion
of the main open points and doubts that must be
revisited in our coming studies, relating to imitation
learning in general and the learning of CA more
specifically. The example presented in this article
will be used to assess the reliability of imitation
learning in CA problems.
REFERENCES
Vries, P., & Van Kampen, E.-J. (2019). Reinforcement
learning-based control allocation for the innovative
control effectors aircraft. In Proceedings of the AIAA
Scitech 2019 Forum. https://doi.org/10.2514/6.2019-0144.
Raghunathan, R. N., Skulstad, R., Li, G., & Zhang, H.
(2023). Design of constraints for a neural network
based thrust allocator for dynamic ship positioning. In
IECON 2023 - 49th Annual Conference of the IEEE
Industrial Electronics Society (pp. 1–6).
https://doi.org/10.1109/IECON51785.2023.10312100.
Khan, H. Z. I., Mobeen, S., Rajput, J., & Riaz, J. (2024).
Nonlinear control allocation: A learning-based
approach. arXiv. https://arxiv.org/abs/2201.06180.
Skulstad, R., Li, G., Fossen, T. I., & Zhang, H. (2023).
Constrained control allocation for dynamic ship
positioning using deep neural network. Ocean
Engineering, 279, 114434.
https://doi.org/10.1016/j.oceaneng.2023.114434.
Wu, K. C., & Litt, J. S. (2023). Reinforcement learning
approach to flight control allocation with distributed
electric propulsion (NASA Technical Memorandum
No. 20230014863). National Aeronautics and Space
Administration, Glenn Research Center.
https://ntrs.nasa.gov/.
Skrickij, V., Kojis, P., Šabanovič, E., Shyrokau, B., &
Ivanov, V. (2024). Review of integrated chassis control
techniques for automated ground vehicles. Sensors,
24(2), 600. https://doi.org/10.3390/s24020600.
Hua, J., Zeng, L., Li, G., & Ju, Z. (2021). Learning for a
robot: Deep reinforcement learning, imitation learning,
transfer learning. Sensors, 21(4), 1278.
https://doi.org/10.3390/s21041278.
Zare, M., Kebria, P. M., Khosravi, A., & Nahavandi, S.
(2023). A survey of imitation learning: Algorithms,
recent developments, and challenges. arXiv.
https://arxiv.org/abs/2309.02473.
Johansen, T. A., & Fossen, T. I. (2013). Control
allocation—A survey. Automatica, 49(5), 1087–1103.
https://doi.org/10.1016/j.automatica.2013.01.035.
Kissai, M. (2019). Optimal coordination of chassis
systems for vehicle motion control (Doctoral
dissertation, Université Paris-Saclay (COmUE)).
NNT: 2019SACLY004.
Norouzi, A., Heidarifar, H., Borhan, H., Shahbakhti, M., &
Koch, C. R. (2023). Integrating machine learning and
model predictive control for automotive applications: A
review and future directions. Engineering Applications
of Artificial Intelligence, 120, 105878.
https://doi.org/10.1016/j.engappai.2023.105878.
Kissai, M., Monsuez, B., Mouton, X., Martinez, D., &
Tapus, A. (2019). Model predictive control allocation
of systems with different dynamics. In 2019 IEEE
Intelligent Transportation Systems Conference (ITSC)
(pp. 4170–4177).
https://doi.org/10.1109/ITSC.2019.8917438.
Kissai, M., Monsuez, B., & Tapus, A. (2017, June). Current
and future architectures for integrated vehicle dynamics
control.
Yuan, Y., Chen, L., Wu, H., & Li, L. (2022). Advanced
agricultural disease image recognition technologies: A
review. Information Processing in Agriculture, 9(1),
48–59. https://doi.org/10.1016/j.inpa.2021.01.003.
Latif, S., Cuayáhuitl, H., Pervez, F., Shamshad, F., Ali, H.
S., & Cambria, E. (2021). A survey on deep
reinforcement learning for audio-based applications.
arXiv. https://arxiv.org/abs/2101.00240.
Dey, S., Marzullo, T., Zhang, X., & Henze, G. (2023).
Reinforcement learning building control approach
harnessing imitation learning. Energy and AI, 14,
100255. https://doi.org/10.1016/j.egyai.2023.100255.
Davila Delgado, J. M., & Oyedele, L. (2022). Robotics
in construction: A critical review of the reinforcement
learning and imitation learning paradigms. Advanced
Engineering Informatics, 54, 101787.
https://doi.org/10.1016/j.aei.2022.101787.