Authors:
Wenzel Pilar von Pilchau¹; Anthony Stein² and Jörg Hähner¹
Affiliations:
¹Organic Computing Group, University of Augsburg, Am Technologiezentrum 8, Augsburg, Germany; ²Artificial Intelligence in Agricultural Engineering, University of Hohenheim, Garbenstraße 9, Hohenheim, Germany
Keyword(s):
Experience Replay, Deep Q-Network, Deep Reinforcement Learning, Interpolation, Machine Learning.
Abstract:
The concept of Experience Replay is a crucial element in Deep Reinforcement Learning algorithms of the DQN family. The basic approach reuses stored experiences to, among other benefits, overcome the problem of catastrophic forgetting and thereby stabilize learning. However, only experiences that the learner actually observed in the past are used for updates. We anticipate that these experiences possess additional valuable information about the underlying problem that merely needs to be extracted in the right way. To achieve this, we present the Interpolated Experience Replay technique, which leverages stored experiences to create new, synthetic ones by means of interpolation. A previously proposed concept for discrete-state environments is extended to work in continuous problem spaces. We evaluate our approach on the MountainCar benchmark environment and demonstrate its promising potential.
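The core idea of the abstract, synthesizing new replay experiences by interpolating stored ones, could look roughly like the following minimal Python sketch. All names, the buffer layout `(state, action, reward, next_state)`, and the linear mixing between two same-action transitions are illustrative assumptions, not the paper's actual method:

```python
import random

def interpolate_experiences(buffer, action):
    """Create one synthetic transition by linearly interpolating two stored
    transitions that share the given action (hypothetical scheme).

    Each buffer entry is assumed to be (state, action, reward, next_state),
    with states given as tuples of floats. Returns None if fewer than two
    matching transitions exist.
    """
    candidates = [e for e in buffer if e[1] == action]
    if len(candidates) < 2:
        return None

    (s1, _, r1, n1), (s2, _, r2, n2) = random.sample(candidates, 2)
    lam = random.random()  # interpolation coefficient in [0, 1)

    def mix(a, b):
        # Component-wise linear interpolation between two state tuples.
        return tuple(lam * x + (1.0 - lam) * y for x, y in zip(a, b))

    # Synthetic experience: interpolated states and reward, same action.
    return (mix(s1, s2), action, lam * r1 + (1.0 - lam) * r2, mix(n1, n2))
```

The synthetic transition could then be stored alongside real experiences and sampled for DQN updates like any other; whether and how often to interpolate would be a design choice of the replay mechanism.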