Quantum Reinforcement Learning for Solving a Stochastic Frozen Lake Environment and the Impact of Quantum Architecture Choices

Authors: Theodora-Augustina Drăgan 1; Maureen Monnet 1; Christian Mendl 2,3 and Jeanette Lorenz 1

Affiliations: 1 Fraunhofer Institute for Cognitive Systems IKS, Munich, Germany; 2 Technical University of Munich, Department of Informatics, Boltzmannstraße 3, 85748 Garching, Germany; 3 Technical University of Munich, Institute for Advanced Study, Lichtenbergstraße 2a, 85748 Garching, Germany

Keyword(s): Quantum Reinforcement Learning, Proximal Policy Optimization, Parametrizable Quantum Circuits, Frozen Lake, Expressibility, Entanglement Capability, Effective Dimension.

Abstract: Quantum reinforcement learning (QRL) models augment classical reinforcement learning schemes with quantum-enhanced kernels. Different proposals on how to construct such models have empirically shown promising performance; in particular, these models might offer a reduced parameter count and shorter times to reach a solution than classical models. It is, however, presently unclear how these quantum-enhanced kernels, as subroutines within a reinforcement learning pipeline, need to be constructed to indeed yield improved performance over classical models. In this work we address exactly this question. First, we propose a hybrid quantum-classical reinforcement learning model that solves a slippery stochastic frozen lake, an environment considerably more difficult than the deterministic frozen lake. Second, different quantum architectures are studied as options for this hybrid quantum-classical reinforcement learning model, all of them well motivated by the literature. They all show very promising performance with respect to similar classical variants. We further characterize these choices by metrics that are relevant for benchmarking the power of quantum circuits, such as the entanglement capability, the expressibility, and the information density of the circuits. However, we find that these typical metrics do not directly predict the performance of a QRL model.
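As a rough illustration of the kind of pipeline the abstract describes, the sketch below instantiates a slippery (stochastic) Frozen Lake environment and a small parametrized quantum circuit of the sort that could serve as a quantum-enhanced kernel inside a hybrid policy. It is a minimal sketch only, assuming Gymnasium's FrozenLake-v1 and PennyLane's default.qubit simulator; the encoding, circuit layout, and hyperparameters are illustrative and are not the architectures studied in the paper.

# Minimal illustrative sketch (not the paper's exact setup).
# Assumes Gymnasium and PennyLane are installed; all names and
# hyperparameters below are placeholders chosen for clarity.
import gymnasium as gym
import pennylane as qml
from pennylane import numpy as np

# Stochastic ("slippery") Frozen Lake: the chosen action is executed only
# with some probability, making the task harder than the deterministic one.
env = gym.make("FrozenLake-v1", is_slippery=True)

n_qubits = 4  # 4 qubits suffice to binary-encode the 16 discrete states
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def policy_circuit(inputs, weights):
    # Encode the pre-processed observation as single-qubit rotations.
    for i in range(n_qubits):
        qml.RY(inputs[i], wires=i)
    # One variational layer: trainable rotations plus entangling CNOTs.
    for i in range(n_qubits):
        qml.RZ(weights[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    # Expectation values would be post-processed classically into action logits.
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

obs, _ = env.reset(seed=0)
# Toy encoding: map the bits of the discrete state to rotation angles.
inputs = np.pi * np.array([(int(obs) >> k) & 1 for k in range(n_qubits)], dtype=float)
weights = np.random.uniform(0, 2 * np.pi, n_qubits)
print(policy_circuit(inputs, weights))

In a full hybrid model, such a circuit would replace (part of) the policy or value network inside a classical training loop such as proximal policy optimization, with the circuit outputs fed into a small classical post-processing layer.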

CC BY-NC-ND 4.0

Paper citation in several formats:
Drăgan, T.; Monnet, M.; Mendl, C. and Lorenz, J. (2023). Quantum Reinforcement Learning for Solving a Stochastic Frozen Lake Environment and the Impact of Quantum Architecture Choices. In Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART; ISBN 978-989-758-623-1; ISSN 2184-433X, SciTePress, pages 199-210. DOI: 10.5220/0011673400003393

@conference{icaart23,
author={Theodora{-}Augustina Drăgan and Maureen Monnet and Christian Mendl and Jeanette Lorenz},
title={Quantum Reinforcement Learning for Solving a Stochastic Frozen Lake Environment and the Impact of Quantum Architecture Choices},
booktitle={Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2023},
pages={199--210},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011673400003393},
isbn={978-989-758-623-1},
issn={2184-433X},
}

TY - CONF
JO - Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Quantum Reinforcement Learning for Solving a Stochastic Frozen Lake Environment and the Impact of Quantum Architecture Choices
SN - 978-989-758-623-1
IS - 2184-433X
AU - Drăgan, T.
AU - Monnet, M.
AU - Mendl, C.
AU - Lorenz, J.
PY - 2023
SP - 199
EP - 210
DO - 10.5220/0011673400003393
PB - SciTePress
ER -