Authors:
Richardson Ribeiro 1; Adriano F. Ronszcka 1; Marco A. C. Barbosa 1; Fábio Favarim 1 and Fabrício Enembreck 2
Affiliations:
1 Federal University of Technology, Brazil; 2 Pontifical Catholic University, Brazil
Keyword(s):
Swarm Intelligence, Ant-Colony Algorithms, Dynamic Environments.
Related Ontology Subjects/Areas/Topics:
Agents; Artificial Intelligence; Artificial Intelligence and Decision Support Systems; Distributed and Mobile Software Systems; Enterprise Information Systems; Intelligent Agents; Internet Technology; Knowledge Engineering and Ontology Development; Knowledge-Based Systems; Multi-Agent Systems; Software Engineering; Symbolic Systems; Web Information Systems and Technologies
Abstract:
This paper proposes strategies for updating action policies in dynamic environments and discusses the influence of learning parameters in algorithms based on swarm behavior. It is shown that inappropriate choices of learning parameters may delay the learning process or lead to convergence on an unacceptable solution. Such problems are aggravated in dynamic environments, since tuning the parameter values of reward-based algorithms is not enough to guarantee satisfactory convergence. In this context, strategy-updating policies are proposed to modify reward values, thereby improving coordination between agents operating within dynamic environments. A framework has been developed that iteratively demonstrates the influence of parameters and updating strategies. Experimental results are reported which show that it is possible to accelerate convergence to a consistent global policy, improving on the results achieved by classical swarm-based approaches.
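For orientation only, the sketch below shows a standard ant-colony pheromone update and transition rule, the kind of swarm-based mechanism whose learning parameters the abstract refers to. It is not the updating strategies proposed in the paper; the parameter names (rho, q, alpha, beta) and data layout are generic placeholders chosen for illustration.

```python
import random

def update_pheromone(pheromone, tours, rho=0.1, q=1.0):
    """Evaporate every trail, then deposit reward on the edges of each tour.

    rho (evaporation rate) and q (reward scale) are learning parameters:
    poor choices slow convergence or lock the colony onto a bad solution.
    """
    for edge in pheromone:
        pheromone[edge] *= (1.0 - rho)                      # evaporation
    for tour, cost in tours:
        for edge in zip(tour, tour[1:]):
            # reward inversely proportional to the tour's cost
            pheromone[edge] = pheromone.get(edge, 0.0) + q / cost

def choose_next(city, unvisited, pheromone, dist, alpha=1.0, beta=2.0):
    """Probabilistic transition rule weighting pheromone (alpha) against
    the heuristic desirability 1/distance (beta)."""
    weights = [(pheromone.get((city, j), 1e-6) ** alpha) *
               ((1.0 / dist[(city, j)]) ** beta)
               for j in unvisited]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]
```

In a static setting, repeatedly alternating tour construction (choose_next) with update_pheromone converges toward a stable policy; in the dynamic environments the paper targets, the deposited reward values themselves must be revised when the environment changes, which is the role of the proposed strategy-updating policies.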