Authors:
Kohei Suzuki 1 and Shohei Kato 2
Affiliations:
1 Dept. of Computer Science and Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan;
2 Dept. of Computer Science and Engineering, Nagoya Institute of Technology, and Frontier Research Institute for Information Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
Keyword(s):
Reinforcement Learning, Genetic Algorithm, Perceptual Aliasing.
Related Ontology Subjects/Areas/Topics:
Artificial Intelligence; Computational Intelligence; Evolutionary Computing; Knowledge Discovery and Information Retrieval; Knowledge-Based Systems; Machine Learning; Soft Computing; Symbolic Systems
Abstract:
Perceptual aliasing is one of the major problems in applying reinforcement learning to the real world. It arises in POMDP environments, where agents cannot observe states correctly, and it causes reinforcement learning to fail. HQ-learning is one known solution: it resolves perceptual aliasing by decomposing a task into subgoals handled by subagents. However, the subagents learn independently and must relearn whenever the subgoals change. In addition, the number of subgoals is fixed, and the number of episodes required for reinforcement learning increases unless that number is appropriate. In this paper, we propose a reinforcement learning method that generates subgoals using a genetic algorithm. We also demonstrate the effectiveness of our method through experiments with partially observable mazes.
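To illustrate the idea of evolving subgoal sequences rather than fixing them in advance, the following is a minimal, hypothetical sketch (not the paper's actual algorithm): a genetic algorithm evolves variable-length lists of candidate subgoals for a toy corridor task, where the surrogate fitness function simply rewards chromosomes whose subgoals form an ordered chain of waypoints toward the goal. All names and parameters here are illustrative assumptions.

```python
import random

# Toy setting: a corridor of states 0..9 with the goal at state 9.
# A "chromosome" is a variable-length list of candidate subgoal states.
GOAL = 9
STATES = list(range(10))

def fitness(subgoals):
    # Surrogate fitness (an assumption, standing in for evaluating how well
    # a chain of subagents solves the maze): reward each subgoal that is a
    # strictly increasing waypoint before the goal, penalize the rest.
    score, prev = 0, 0
    for g in subgoals:
        if prev < g < GOAL:
            score += 1
            prev = g
        else:
            score -= 1
    return score

def crossover(a, b, rng):
    # One-point crossover on the two subgoal lists.
    if not a or not b:
        return a[:]
    return a[:rng.randrange(len(a) + 1)] + b[rng.randrange(len(b) + 1):]

def mutate(ch, rng, rate=0.2):
    # Point mutation plus insertion, so the number of subgoals can change:
    # the chromosome length is evolved rather than fixed in advance.
    ch = ch[:]
    if ch and rng.random() < rate:
        ch[rng.randrange(len(ch))] = rng.choice(STATES)
    if rng.random() < rate:
        ch.insert(rng.randrange(len(ch) + 1), rng.choice(STATES))
    return ch

def evolve(pop_size=30, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(STATES) for _ in range(rng.randint(1, 4))]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]           # keep the better half
        pop = elite + [mutate(crossover(rng.choice(elite),
                                        rng.choice(elite), rng), rng)
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
```

In the actual method, the fitness of a chromosome would come from running the subagents on the partially observable maze; the sketch only shows the evolutionary loop that searches over both the placement and the number of subgoals.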