Authors:
Yasuhiro Onuki; Yasuyuki Tahara; Akihiko Ohsuga and Yuichi Sei
Affiliation:
The University of Electro-Communications, Tokyo, Japan
Keyword(s):
Deep Reinforcement Learning, Game Agents, Additional Rewards, VAE, NLE.
Abstract:
Deep reinforcement learning (DRL) has been widely used in agent research across various video games, demonstrating its effectiveness. Recently, there has been increasing interest in DRL research in complex environments such as Roguelike games. These games, while complex, offer fast execution speeds, making them useful testbeds for DRL agents. Among them, the game NetHack has gained considerable research attention. In this study, we aim to train a DRL agent efficiently, with reduced training costs, using the NetHack Learning Environment (NLE). We propose a method that incorporates a variational autoencoder (VAE). Additionally, since the rewards provided by the NLE are sparse, which complicates training, we also train a DRL agent with additional rewards. Contrary to our expectation that the VAE would allow the agent to progress more advantageously in the game, it proves ineffective. Conversely, we find that the additional rewards are effective.
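The "additional rewards" idea mentioned in the abstract can be illustrated with a reward-shaping wrapper: the base environment emits sparse rewards, and the wrapper adds a small dense bonus (here, for visiting new positions). This is a minimal hedged sketch, not the authors' actual implementation; the environment interface, the `"pos"` observation key, and the exploration bonus are illustrative assumptions.

```python
class RewardShapingWrapper:
    """Wraps an environment and adds a small bonus to its sparse reward.

    Hypothetical sketch: the wrapped env is assumed to return observations
    containing a "pos" key, and the bonus term is an exploration reward
    for visiting previously unseen positions.
    """

    def __init__(self, env, bonus_per_new_cell=0.01):
        self.env = env
        self.bonus = bonus_per_new_cell
        self.visited = set()

    def reset(self):
        self.visited.clear()
        obs = self.env.reset()
        self.visited.add(obs["pos"])
        return obs

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if obs["pos"] not in self.visited:  # exploration bonus for a new tile
            reward += self.bonus
            self.visited.add(obs["pos"])
        return obs, reward, done, info


class ToyEnv:
    """Minimal stand-in environment: the agent moves along a line.

    Its base reward is always 0, mimicking a sparse-reward setting.
    """

    def __init__(self):
        self.x = 0

    def reset(self):
        self.x = 0
        return {"pos": self.x}

    def step(self, action):
        self.x += action
        return {"pos": self.x}, 0.0, False, {}


env = RewardShapingWrapper(ToyEnv())
env.reset()
_, r1, _, _ = env.step(1)  # moves to a new cell: shaped bonus is added
_, r2, _, _ = env.step(0)  # stays on a visited cell: no bonus
print(r1, r2)
```

The wrapper leaves the underlying environment untouched, so the same agent code can be trained with or without the additional rewards by simply toggling the wrapper.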