

Authors: Yifei Chen; Lambert Schomaker and Marco Wiering

Affiliation: Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, 9747 AG Groningen, The Netherlands

Keyword(s): Reinforcement Learning, Q-learning, Function Approximation, Overestimation Bias.

Abstract: In reinforcement learning, Q-learning is the best-known algorithm, but it suffers from overestimation bias, which may lead to poor performance or unstable learning. In this paper, we present a novel analysis of this problem using various control tasks. For solving these tasks, Q-learning is combined with a multilayer perceptron (MLP), experience replay, and a target network. We focus our analysis on the effect of the learning rate when training the MLP. Furthermore, we examine whether decaying the learning rate over time has advantages over static ones. Experiments have been performed using various maze-solving problems involving deterministic or stochastic transition functions and 2D or 3D grids, and two OpenAI Gym control problems. We conducted the same experiments with Double Q-learning using two MLPs with the same parameter settings, but without target networks. The results on the maze problems show that for Q-learning combined with the MLP, the overestimation occurs when higher learning rates are used and not when lower learning rates are used. The Double Q-learning variant becomes much less stable with higher learning rates, and with low learning rates the overestimation bias may still occur. Overall, decaying learning rates clearly improve the performance of both Q-learning and Double Q-learning.
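The overestimation bias discussed in the abstract can be illustrated with a small numerical sketch (not the authors' implementation; a toy single-state setting with hypothetical noise parameters): taking the max over noisy value estimates yields a positively biased target, whereas the Double Q-learning scheme of selecting the action with one estimator and evaluating it with an independent one removes that bias in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: one state with 10 actions whose true values are all zero.
# Two independent noisy estimators of those values (e.g. two MLPs).
n_actions, n_samples = 10, 10_000
est_a = rng.normal(0.0, 1.0, size=(n_samples, n_actions))
est_b = rng.normal(0.0, 1.0, size=(n_samples, n_actions))

# Q-learning-style target: max over a single noisy estimate.
# Since E[max] > max E = 0, this target is positively biased.
q_target = est_a.max(axis=1).mean()

# Double Q-learning-style target: select the greedy action with one
# estimator, evaluate it with the other; the bias vanishes in expectation.
sel = est_a.argmax(axis=1)
dq_target = est_b[np.arange(n_samples), sel].mean()

print(f"Q-learning target bias:        {q_target:+.3f}")  # clearly positive
print(f"Double Q-learning target bias: {dq_target:+.3f}")  # near zero
```

This only demonstrates the bias mechanism; the paper's contribution is how the MLP's learning rate modulates this effect in practice.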

CC BY-NC-ND 4.0


Paper citation in several formats:
Chen, Y.; Schomaker, L. and Wiering, M. (2021). An Investigation Into the Effect of the Learning Rate on Overestimation Bias of Connectionist Q-learning. In Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART; ISBN 978-989-758-484-8; ISSN 2184-433X, SciTePress, pages 107-118. DOI: 10.5220/0010227301070118

@conference{icaart21,
author={Yifei Chen and Lambert Schomaker and Marco Wiering},
title={An Investigation Into the Effect of the Learning Rate on Overestimation Bias of Connectionist Q-learning},
booktitle={Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2021},
pages={107-118},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010227301070118},
isbn={978-989-758-484-8},
issn={2184-433X},
}

TY  - CONF
JO  - Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI  - An Investigation Into the Effect of the Learning Rate on Overestimation Bias of Connectionist Q-learning
SN  - 978-989-758-484-8
IS  - 2184-433X
AU  - Chen, Y.
AU  - Schomaker, L.
AU  - Wiering, M.
PY  - 2021
SP  - 107
EP  - 118
DO  - 10.5220/0010227301070118
PB  - SciTePress
ER  -