Authors:
Douglas M. Guisi¹; Richardson Ribeiro¹; Marcelo Teixeira¹; André P. Borges¹; Eden R. Dosciatti¹ and Fabrício Enembreck²
Affiliations:
¹ Federal University of Technology-Paraná, Brazil; ² Pontifical Catholic University-Paraná, Brazil
Keyword(s):
Multi-Agent Systems, Coordination Model, Reinforcement Learning, Hybrid Model.
Related Ontology Subjects/Areas/Topics:
Agents; Artificial Intelligence; Artificial Intelligence and Decision Support Systems; Distributed and Mobile Software Systems; Enterprise Information Systems; Intelligent Agents; Internet Technology; Knowledge Engineering and Ontology Development; Knowledge-Based Systems; Multi-Agent Systems; Software Engineering; Symbolic Systems; Web Information Systems and Technologies
Abstract:
The main contribution of this paper is a hybrid coordination method built by combining previously developed interaction models. These interaction models are based on reward sharing in multi-agent learning, with the goal of interactively discovering good-quality policies. When the exchange of rewards among agents does not occur properly, it can delay learning or even cause unexpected behavior, making cooperation inefficient and leading to convergence on an unsatisfactory policy. Building on these concepts, the hybrid method exploits the characteristics of each model, reducing possible conflicts between the actions of different reward-based policies and improving the coordination of agents in reinforcement learning problems. Experimental results show that the hybrid method can accelerate convergence, quickly reaching optimal policies even in large state spaces and exceeding the results of classical reinforcement learning approaches.
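The paper's specific interaction models and hybrid combination rule are not detailed in this abstract. As a minimal sketch of the underlying idea of reward sharing among learning agents, the toy example below (a chain world, two tabular Q-learners, and a fixed sharing fraction `share`, all hypothetical choices not taken from the paper) lets each agent fold a fraction of its partner's reward into its own Q-update:

```python
import random

def shared_q_learning(n_states=8, episodes=400, alpha=0.2, gamma=0.9,
                      epsilon=0.1, share=0.5, seed=1):
    """Two tabular Q-learners on a chain (goal at the right end).
    Each agent's update uses its own reward plus a fraction `share`
    of the reward its partner obtained on the same step.
    Illustrative only; not the paper's interaction models."""
    rng = random.Random(seed)
    # Q[agent][state][action]; actions: 0 = left, 1 = right
    Q = [[[0.0, 0.0] for _ in range(n_states)] for _ in range(2)]

    def act(i, s):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            return rng.randrange(2)
        return 0 if Q[i][s][0] > Q[i][s][1] else 1

    def env_step(s, a):
        # deterministic chain: reward 1.0 only on reaching the goal state
        s2 = min(n_states - 1, max(0, s + (1 if a else -1)))
        return s2, (1.0 if s2 == n_states - 1 else 0.0)

    for _ in range(episodes):
        states = [0, 0]
        for _ in range(3 * n_states):
            moves = []
            for i in (0, 1):
                a = act(i, states[i])
                s2, r = env_step(states[i], a)
                moves.append((states[i], a, s2, r))
                states[i] = s2
            for i in (0, 1):
                s, a, s2, r = moves[i]
                # reward sharing: add a fraction of the partner's reward
                r_total = r + share * moves[1 - i][3]
                Q[i][s][a] += alpha * (r_total + gamma * max(Q[i][s2]) - Q[i][s][a])
            if all(s == n_states - 1 for s in states):
                break
    return Q

# After training, both agents prefer "right" (toward the goal) at the start state.
Q = shared_q_learning()
```

In this sketch the shared reward simply accelerates value propagation for both learners; the paper's hybrid method instead arbitrates between different sharing models to reduce conflicts between their reward-driven policies.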