Decoupling State Representation Methods from Reinforcement Learning in Car Racing
Authors: Juan M. Montoya 1 ; Imant Daunhawer 2 ; Julia E. Vogt 2 and Marco Wiering 3

Affiliations: 1 Department of Computer Science, University of Konstanz, Germany ; 2 Department of Computer Science, ETH Zurich, Switzerland ; 3 Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, The Netherlands

Keyword(s): Deep Reinforcement Learning, State Representation Learning, Variational Autoencoders, Contrastive Learning.

Abstract: In the quest for efficient and robust learning methods, combining unsupervised state representation learning and reinforcement learning (RL) could offer advantages for scaling RL algorithms by providing the models with a useful inductive bias. To achieve this, an encoder is trained in an unsupervised manner with two state representation methods, a variational autoencoder and a contrastive estimator. The learned features are then fed to the actor-critic RL algorithm Proximal Policy Optimization (PPO) to learn a policy for playing OpenAI's car racing environment. Hence, this procedure decouples state representations from RL controllers. For the integration of RL with unsupervised learning, we explore various designs for variational autoencoders and contrastive learning. The proposed method is compared to a deep network trained directly on pixel inputs with PPO. The results show that the proposed method performs slightly worse than directly learning from pixel inputs; however, it has a more stable learning curve, a substantial reduction of the buffer size, and requires optimizing 88% fewer parameters. These results indicate that the use of pre-trained state representations has several benefits for solving RL tasks.
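The abstract only summarizes the design, so the following is a minimal PyTorch sketch of the decoupling it describes: a convolutional encoder is pre-trained without reward supervision (e.g. as a VAE encoder or with a contrastive objective), frozen, and only a small PPO actor-critic head is optimized on its latent features. All names, layer sizes, the latent dimension, and the three-dimensional action output are hypothetical illustrations, not the architecture from the paper.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Stand-in for the pre-trained encoder; in the paper's setup its weights
    # would come from unsupervised training (VAE or contrastive estimator).
    def __init__(self, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.LazyLinear(latent_dim)  # maps conv features to a latent code

    def forward(self, obs):
        return self.fc(self.conv(obs))

class ActorCritic(nn.Module):
    # Small PPO head that sees only the latent code, never raw pixels.
    def __init__(self, latent_dim=32, n_actions=3):
        super().__init__()
        self.actor = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                   nn.Linear(64, n_actions))
        self.critic = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                    nn.Linear(64, 1))

    def forward(self, z):
        return self.actor(z), self.critic(z)

encoder = Encoder()
# Here the encoder would be loaded from unsupervised pre-training; we freeze
# it so PPO never updates the representation (the "decoupling").
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

policy = ActorCritic()
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)  # only the small head trains

obs = torch.rand(8, 3, 96, 96)   # dummy batch of CarRacing-sized frames
with torch.no_grad():
    z = encoder(obs)             # fixed, pre-trained features
logits, value = policy(z)        # PPO losses would be computed from these

Because gradients stop at the encoder, the RL optimizer touches only the actor-critic head, which is consistent with the abstract's report of a much smaller parameter count being optimized.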

License: CC BY-NC-ND 4.0

Paper citation in several formats:
Montoya, J.; Daunhawer, I.; Vogt, J. and Wiering, M. (2021). Decoupling State Representation Methods from Reinforcement Learning in Car Racing. In Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART; ISBN 978-989-758-484-8; ISSN 2184-433X, SciTePress, pages 752-759. DOI: 10.5220/0010237507520759

@conference{icaart21,
author={Juan M. Montoya and Imant Daunhawer and Julia E. Vogt and Marco Wiering},
title={Decoupling State Representation Methods from Reinforcement Learning in Car Racing},
booktitle={Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2021},
pages={752-759},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010237507520759},
isbn={978-989-758-484-8},
issn={2184-433X},
}

TY - CONF
JO - Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Decoupling State Representation Methods from Reinforcement Learning in Car Racing
SN - 978-989-758-484-8
IS - 2184-433X
AU - Montoya, J.
AU - Daunhawer, I.
AU - Vogt, J.
AU - Wiering, M.
PY - 2021
SP - 752
EP - 759
DO - 10.5220/0010237507520759
PB - SciTePress
ER -