Transfer Learning in Deep Reinforcement Learning: Actor-Critic Model Reuse for Changed State-Action Space
Feline Malin Barg, Eric Veith, Lasse Hammer
2025
Abstract
Deep Reinforcement Learning (DRL) is a leading method for control in high-dimensional environments, excelling in complex tasks. However, adapting DRL agents to sudden changes, such as reduced sensors or actuators, poses challenges to learning stability and efficiency. While Transfer Learning (TL) can reduce retraining time, its application in environments with sudden state-action space modifications remains underexplored. Resilient, time-efficient strategies for adapting DRL agents to structural changes in state-action space dimension are still needed. This paper introduces Actor-Critic Model Reuse (ACMR), a novel TL-based algorithm for tasks with altered state-action spaces. ACMR enables agents to leverage pre-trained models to speed up learning in modified environments, using hidden layer reuse, layer freezing, and network layer expansion. The results show that ACMR significantly reduces adaptation times while maintaining strong performance under changed state-action space dimensions. The study also provides insights into adaptation performance across different ACMR configurations.
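The abstract names three transfer mechanisms: hidden layer reuse, layer freezing, and network layer expansion. A rough sketch of how these could be combined when the state-action dimensions change is shown below; the layer sizes, the dict-based network representation, and the `acmr_transfer` helper are illustrative assumptions for this sketch, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    """Small random weight matrix for a dense layer (illustrative)."""
    return rng.normal(scale=0.1, size=(n_in, n_out))

# Hypothetical pre-trained actor for the source task: 8 observations -> 4 actions.
source = {
    "input":  init_layer(8, 64),   # observations -> hidden
    "hidden": init_layer(64, 64),  # hidden -> hidden
    "output": init_layer(64, 4),   # hidden -> actions
}

def acmr_transfer(source, new_obs_dim, new_act_dim, freeze=("hidden",)):
    """Build a target network for a changed state-action space:
    the dimension-dependent input/output layers are re-initialised
    (layer expansion/contraction), the inner hidden layer is copied
    from the source model (layer reuse), and the layers named in
    `freeze` are excluded from gradient updates (layer freezing)."""
    target = {
        "input":  init_layer(new_obs_dim, 64),  # matches new state dimension
        "hidden": source["hidden"].copy(),      # reused pre-trained knowledge
        "output": init_layer(64, new_act_dim),  # matches new action dimension
    }
    trainable = {name: name not in freeze for name in target}
    return target, trainable

# Target task after a structural change: 6 observations -> 3 actions.
target, trainable = acmr_transfer(source, new_obs_dim=6, new_act_dim=3)
```

Here only the frozen hidden layer carries knowledge across tasks, while the re-initialised input and output layers absorb the dimensional change; which layers to reuse versus retrain is exactly the kind of configuration choice the paper reports on.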
Paper Citation
in Harvard Style
Barg F., Veith E. and Hammer L. (2025). Transfer Learning in Deep Reinforcement Learning: Actor-Critic Model Reuse for Changed State-Action Space. In Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART; ISBN 978-989-758-737-5, SciTePress, pages 682-692. DOI: 10.5220/0013304900003890
in Bibtex Style
@conference{icaart25,
author={Feline Barg and Eric Veith and Lasse Hammer},
title={Transfer Learning in Deep Reinforcement Learning: Actor-Critic Model Reuse for Changed State-Action Space},
booktitle={Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2025},
pages={682-692},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013304900003890},
isbn={978-989-758-737-5},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Transfer Learning in Deep Reinforcement Learning: Actor-Critic Model Reuse for Changed State-Action Space
SN - 978-989-758-737-5
AU - Barg F.
AU - Veith E.
AU - Hammer L.
PY - 2025
SP - 682
EP - 692
DO - 10.5220/0013304900003890
PB - SciTePress