Authors: Fernando Fradique Duarte 1; Nuno Lau 2; Artur Pereira 2 and Luís Reis 3
Affiliations:
1 Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, Aveiro, Portugal
2 Department of Electronics, Telecommunications and Informatics, University of Aveiro, Aveiro, Portugal
3 Faculty of Engineering, Department of Informatics Engineering, University of Porto, Porto, Portugal
Keyword(s):
Deep Reinforcement Learning, Multi-Head Attention, Advantage Actor-Critic.
Abstract:
Deep Learning agents are known to be very sensitive to their parameterization values. Attention-based Deep Reinforcement Learning agents further complicate this issue due to the additional parameterization associated with the computation of their attention function. One example concerns the number of attention heads to use in multi-head attention-based agents. Usually, these hyperparameters are set manually, which may be neither optimal nor efficient. This work addresses the issue of choosing the appropriate number of attention heads dynamically, by endowing the agent with a policy πh trained with policy gradient. At each timestep of agent-environment interaction, πh is responsible for choosing the most suitable number of attention heads according to the contextual memory of the agent. This dynamic parameterization is compared to a static parameterization in terms of performance. The role of πh is further assessed through additional analysis of the distribution of the number of attention heads throughout the training procedure and over the course of the game. The Atari 2600 videogame benchmark was used to perform and validate all the experiments.
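The abstract does not give implementation details, so the following is only a minimal, hypothetical sketch of the idea in PyTorch: a head-selection policy (named pi_h here) samples a head count per timestep from a summary of the agent's contextual memory and is trained with a REINFORCE-style policy-gradient term. All module names, dimensions, and the candidate head counts (1, 2, 4, 8) are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of per-timestep head-count selection; not the paper's code.
import torch
import torch.nn as nn


class DynamicHeadAttention(nn.Module):
    """Multi-head attention whose head count is chosen at runtime by pi_h."""

    def __init__(self, embed_dim=64, head_options=(1, 2, 4, 8)):
        super().__init__()
        self.head_options = head_options
        # One attention module per candidate head count (a simple way to allow
        # a variable number of heads without re-splitting the projections).
        self.attentions = nn.ModuleList(
            nn.MultiheadAttention(embed_dim, h, batch_first=True)
            for h in head_options
        )
        # pi_h: maps a summary of the contextual memory to a distribution
        # over the candidate head counts; trained with policy gradient.
        self.pi_h = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, len(head_options))
        )

    def forward(self, query, memory):
        # Summarize the contextual memory and sample a head count per sample.
        context = memory.mean(dim=1)                      # (B, embed_dim)
        logits = self.pi_h(context)                       # (B, num_options)
        dist = torch.distributions.Categorical(logits=logits)
        choice = dist.sample()                            # chosen head-count index
        log_prob = dist.log_prob(choice)                  # for the policy-gradient loss

        # Apply the selected attention module per sample (loop kept simple here).
        outputs = []
        for b in range(query.size(0)):
            attn = self.attentions[choice[b].item()]
            out, _ = attn(query[b:b + 1], memory[b:b + 1], memory[b:b + 1])
            outputs.append(out)
        return torch.cat(outputs, dim=0), log_prob


if __name__ == "__main__":
    B, T, D = 4, 16, 64
    module = DynamicHeadAttention(embed_dim=D)
    query, memory = torch.randn(B, 1, D), torch.randn(B, T, D)
    out, log_prob = module(query, memory)
    # REINFORCE-style update for pi_h: weight log-probabilities by a return
    # signal (a random stand-in here) and descend the negated objective.
    returns = torch.randn(B)
    loss = -(log_prob * returns).mean()
    loss.backward()
    print(out.shape, loss.item())
```

In an actual Advantage Actor-Critic agent, the return signal would come from the critic's advantage estimates, and the pi_h loss would be added to the usual actor and critic losses; those details are not specified in the abstract.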