Authors:
Simyung Chang 1; YoungJoon Yoo 2; Jaeseok Choi 3 and Nojun Kwak 3
Affiliations:
1 Seoul National University, Seoul, Korea and Samsung Electronics, Suwon, Korea
2 Clova AI Research, NAVER Corp., Seongnam, Korea
3 Seoul National University, Seoul, Korea
Keyword(s):
BOOK, Book Learning, Reinforcement Learning.
Abstract:
We introduce a novel method for training reinforcement learning (RL) agents by sharing knowledge in a way similar to the use of a book. Information recorded in the form of a book is the main means by which humans learn knowledge. Nevertheless, conventional deep RL methods have mainly focused either on experiential learning, where the agent learns through interactions with the environment from scratch, or on imitation learning, which tries to mimic a teacher. In contrast, our proposed book learning shares key information among different agents in a book-like manner through the following two mechanisms: (1) By defining a linguistic function, input states can be clustered semantically into a relatively small number of core clusters, which are forwarded to other RL agents in a prescribed manner. (2) By defining state priorities and the contents to be recorded, core experiences can be selected and stored in a small container. We call this container a ‘BOOK’. Our method learns hundreds to thousands of times faster than conventional methods by learning only a handful of core cluster entries, which shows that deep RL agents can learn effectively through knowledge shared by other agents.
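The two mechanisms named in the abstract — a linguistic function that maps raw states to a small number of core clusters, and a bounded, priority-ordered container of core experiences — can be sketched as follows. This is a minimal illustration under assumed simplifications, not the authors' implementation; the names `Book`, `Entry`, and `linguistic_fn`, the grid-based discretization, and the heap-based eviction policy are all hypothetical choices made for this sketch.

```python
import heapq
from dataclasses import dataclass, field

def linguistic_fn(state, grid=0.5):
    # Hypothetical "linguistic function": coarsely discretize a continuous
    # state vector so that semantically similar states share a cluster id.
    return tuple(round(s / grid) for s in state)

@dataclass(order=True)
class Entry:
    priority: float                          # state priority (compared on)
    cluster_id: object = field(compare=False)  # output of linguistic_fn
    content: tuple = field(compare=False)      # recorded contents, e.g. (state, action)

class Book:
    """Bounded container keeping only the highest-priority core experiences."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []  # min-heap: the lowest-priority entry is evicted first

    def record(self, priority, cluster_id, content):
        entry = Entry(priority, cluster_id, content)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif entry > self._heap[0]:
            # Container is full: keep the new entry only if it outranks
            # the current lowest-priority one.
            heapq.heapreplace(self._heap, entry)

    def read(self):
        # Entries sorted from highest to lowest priority, as a reading agent
        # would consume them.
        return sorted(self._heap, reverse=True)
```

A writing agent would call `record` during its own training, and a reading agent would bootstrap from `read()` instead of exploring from scratch, which is where the claimed speed-up would come from.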