Actor-Critic Reinforcement Learning with Neural Networks in Continuous Games

Authors: Gabriel Leuenberger and Marco A. Wiering

Affiliation: University of Groningen, Netherlands

Keyword(s): Reinforcement Learning, Continuous Actions, Multi-Layer Perceptrons, Computer Games, Actor-Critic Methods.

Related Ontology Subjects/Areas/Topics: Agents ; Artificial Intelligence ; Autonomous Systems ; Biomedical Engineering ; Biomedical Signal Processing ; Computational Intelligence ; Evolutionary Computing ; Health Engineering and Technology Applications ; Human-Computer Interaction ; Knowledge Discovery and Information Retrieval ; Knowledge-Based Systems ; Machine Learning ; Methodologies and Methods ; Neural Networks ; Neurocomputing ; Neurotechnology, Electronics and Informatics ; Pattern Recognition ; Physiological Computing Systems ; Sensor Networks ; Signal Processing ; Soft Computing ; Symbolic Systems ; Theory and Methods

Abstract: Reinforcement learning agents with artificial neural networks have previously been shown to acquire human-level dexterity in discrete video game environments, where only the current state of the game and a reward are given at each time step. This paper focuses on the harder problem of continuous environments, where the states, observations, and actions are all continuous. The Continuous Actor-Critic Learning Automaton (CACLA) is applied to a 2D aerial combat simulation environment with continuous state and action spaces, with both the Actor and the Critic implemented as multilayer perceptrons. For this game environment, it is shown that: 1) the exploration of CACLA's action space strongly improves when Gaussian noise is replaced by an Ornstein-Uhlenbeck process; 2) a novel Monte Carlo variant of CACLA turns out to be inferior to the original CACLA; 3) insights gained from this variant lead to a modified version of CACLA that relies on a third multilayer perceptron to estimate the absolute error of the Critic, which is used to correct the Actor's learning rule. This Corrected CACLA outperforms the original CACLA algorithm.
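As context for the abstract's first finding, the following is a minimal sketch, assuming NumPy and hypothetical Actor/Critic MLP wrappers with predict()/train_toward() methods, of Ornstein-Uhlenbeck exploration noise combined with the baseline CACLA update rule (van Hasselt and Wiering). The hyperparameter values are illustrative, not the paper's settings. The Corrected CACLA described above additionally trains a third MLP to estimate the Critic's absolute TD error and uses it to correct the Actor's learning rule; that exact correction is given in the paper and is not reproduced here.

```python
import numpy as np

class OrnsteinUhlenbeckNoise:
    """Temporally correlated exploration noise. The paper reports that
    replacing i.i.d. Gaussian noise with an OU process strongly improves
    CACLA's exploration. theta/sigma here are illustrative defaults."""
    def __init__(self, dim, theta=0.15, sigma=0.3, dt=1.0):
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.x = np.zeros(dim)

    def sample(self):
        # dx = theta * (0 - x) * dt + sigma * sqrt(dt) * N(0, I)
        self.x += (-self.theta * self.x * self.dt
                   + self.sigma * np.sqrt(self.dt) * np.random.randn(self.x.size))
        return self.x

def cacla_step(actor, critic, s, a, r, s_next, gamma=0.99):
    """One CACLA update: the Critic does TD learning on the state value,
    and the Actor is pulled toward the explored action only when the TD
    error is positive, i.e. the action was better than expected.
    `actor` and `critic` stand in for multilayer perceptrons with
    hypothetical predict()/train_toward() methods."""
    td_target = r + gamma * critic.predict(s_next)
    td_error = td_target - critic.predict(s)
    critic.train_toward(s, td_target)   # regress V(s) toward the TD target
    if td_error > 0:                    # exploration found a better action
        actor.train_toward(s, a)        # move the policy toward it
    return td_error
```

In an acting loop, the explored action would be drawn as `a = actor.predict(s) + noise.sample()`; the OU process makes successive perturbations correlated over time, which suits control tasks where dithering around zero-mean Gaussian noise tends to cancel itself out.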

CC BY-NC-ND 4.0


Paper citation in several formats:
Leuenberger, G. and Wiering, M. (2018). Actor-Critic Reinforcement Learning with Neural Networks in Continuous Games. In Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART; ISBN 978-989-758-275-2; ISSN 2184-433X, SciTePress, pages 53-60. DOI: 10.5220/0006556500530060

@conference{icaart18,
author={Gabriel Leuenberger and Marco A. Wiering},
title={Actor-Critic Reinforcement Learning with Neural Networks in Continuous Games},
booktitle={Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART},
year={2018},
pages={53-60},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006556500530060},
isbn={978-989-758-275-2},
issn={2184-433X},
}

TY - CONF
JO - Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART
TI - Actor-Critic Reinforcement Learning with Neural Networks in Continuous Games
SN - 978-989-758-275-2
IS - 2184-433X
AU - Leuenberger, G.
AU - Wiering, M.
PY - 2018
SP - 53
EP - 60
DO - 10.5220/0006556500530060
PB - SciTePress
ER -