Buitinck, L., Louppe, G., Blondel, M., Pedregosa, F., Mueller, A., Grisel, O., Niculae, V., Prettenhofer, P., Gramfort, A., Grobler, J., Layton, R., VanderPlas, J., Joly, A., Holt, B., and Varoquaux, G. (2013). API design for machine learning software: experiences from the scikit-learn project. In ECML PKDD Workshop: Languages for Data Mining and Machine Learning, pages 108–122.
Chasse, A., Sciarretta, A., and Chauvin, J. (2010). Online optimal control of a parallel hybrid with costate adaptation rule. IFAC Proceedings Volumes, 43(7):99–104.
Christ, M., Braun, N., Neuffer, J., and Kempa-Liehr, A. W. (2018). Time series feature extraction on basis of scalable hypothesis tests (tsfresh – a Python package). Neurocomputing, 307:72–77.
De Jager, B., Van Keulen, T., and Kessels, J. (2013). Optimal Control of Hybrid Vehicles. Springer.
DieselNet (2022). Emission test cycles. http://dieselnet.com/standards/cycles/index.php.
Elliott, H., Cristi, R., and Das, M. (1985). Global stability of adaptive pole placement algorithms. IEEE Transactions on Automatic Control, 30(4):348–356.
Böhme, T. J. and Frank, B. (2017). Hybrid Systems, Optimal Control and Hybrid Vehicles. Springer.
Gao, A., Deng, X., Zhang, M., and Fu, Z. (2017). Design and validation of real-time optimal control with ECMS to minimize energy consumption for parallel hybrid electric vehicles. Mathematical Problems in Engineering.
Goos, J., Criens, C., and Witters, M. (2017). Automatic evaluation and optimization of generic hybrid vehicle topologies using dynamic programming. IFAC-PapersOnLine, 50(1):10065–10071.
Ho, T. K. (1995). Random decision forests. In Proceedings of 3rd International Conference on Document Analysis and Recognition, volume 1, pages 278–282. IEEE.
Hussain, S., Ali, M. U., Park, G.-S., Nengroo, S. H., Khan, M. A., and Kim, H.-J. (2019). A real-time bi-adaptive controller-based energy management system for battery–supercapacitor hybrid electric vehicles. Energies, 12(24):4662.
IPG Automotive (2022). CarMaker. https://ipg-automotive.com/en/products-solutions/software/carmaker.
Jain, A. K. and Dubes, R. C. (1988). Algorithms for Clustering Data. Prentice-Hall, Inc.
Johannink, T., Bahl, S., Nair, A., Luo, J., Kumar, A., Loskyll, M., Ojea, J. A., Solowjow, E., and Levine, S. (2019). Residual reinforcement learning for robot control. In 2019 International Conference on Robotics and Automation (ICRA), pages 6023–6029. IEEE.
Khan, S. G., Herrmann, G., Lewis, F. L., Pipe, T., and Melhuish, C. (2012). Reinforcement learning and optimal adaptive control: An overview and implementation examples. Annual Reviews in Control, 36(1):42–59.
Kreisselmeier, G. and Anderson, B. (1986). Robust model reference adaptive control. IEEE Transactions on Automatic Control, 31(2):127–133.
Leith, D. J. and Leithead, W. E. (2000). Survey of gain-scheduling analysis and design. International Journal of Control, 73(11):1001–1025.
Breiman, L. (2001). Random forests. Machine Learning, 45(1):5–32.
MathWorks (2022). Global Optimization Toolbox User's Guide. The MathWorks.
Musardo, C., Rizzoni, G., Guezennec, Y., and Staccia, B. (2005). A-ECMS: An adaptive algorithm for hybrid electric vehicle energy management. European Journal of Control, 11(4-5):509–524.
Onori, S. and Serrao, L. (2011). On adaptive-ECMS strategies for hybrid electric vehicles. In Proceedings of the International Scientific Conference on Hybrid and Electric Vehicles, Malmaison, France, volume 67.
Powell, W. B. (2007). Approximate Dynamic Programming: Solving the Curses of Dimensionality, volume 703. John Wiley & Sons.
Rezaei, A., Burl, J. B., Zhou, B., and Rezaei, M. (2017). A new real-time optimal energy management strategy for parallel hybrid electric vehicles. IEEE Transactions on Control Systems Technology, 27(2):830–837.
Rummery, G. A. and Niranjan, M. (1994). On-line Q-learning Using Connectionist Systems, volume 37. University of Cambridge, Department of Engineering, Cambridge, UK.
Schreier, M. (2012). Modeling and adaptive control of a quadrotor. In 2012 IEEE International Conference on Mechatronics and Automation, pages 383–390. IEEE.
Staessens, T., Lefebvre, T., and Crevecoeur, G. (2022). Adaptive control of a mechatronic system using constrained residual reinforcement learning. IEEE Transactions on Industrial Electronics, 69(10):10447–10456.
Sutton, R. S. and Barto, A. G. (2018). Reinforcement Learning: An Introduction. MIT Press.
Venkatesan, S. K., Beck, R., Bengea, S., Meskens, J., and Depraetere, B. (2021). Design framework for context-adaptive control methods applied to vehicle power management. In 2021 21st International Conference on Control, Automation and Systems (ICCAS), pages 584–591.
Wang, Y., Gao, F., and Doyle III, F. J. (2009). Survey on iterative learning control, repetitive control, and run-to-run control. Journal of Process Control, 19(10):1589–1600.
Watkins, C. J. and Dayan, P. (1992). Q-learning. Machine Learning, 8:279–292.
Yang, T., Sun, N., and Fang, Y. (2021). Adaptive fuzzy control for a class of MIMO underactuated systems with plant uncertainties and actuator deadzones: Design and experiments. IEEE Transactions on Cybernetics, 52(8):8213–8226.
A Clustering-Based Approach for Adaptive Control Applied to a Hybrid Electric Vehicle