Authors:
Evgenii Sopov
and
Alexey Vakhnin
Affiliation:
Department of System Analysis and Operations Research, Reshetnev Siberian State University of Science and Technology, Krasnoyarsk, Russia
Keyword(s):
Large-Scale Global Optimization, Problem Decomposition, Variable Grouping, Cooperative Coevolution, Evolutionary Algorithms.
Related Ontology Subjects/Areas/Topics:
Artificial Intelligence; Co-Evolution and Collective Behavior; Computational Intelligence; Evolutionary Computing; Soft Computing
Abstract:
Large-scale global optimization (LSGO) is known as one of the most challenging problems for many search algorithms. Many well-known real-world LSGO problems are not separable and are too complex for comprehensive analysis, so they are treated as black-box optimization problems. The most advanced algorithms for LSGO are based on cooperative coevolution with problem decomposition using grouping methods. The random adaptive grouping algorithm (RAG) combines the ideas of random dynamic grouping and learning dynamic grouping. In our previous studies, we demonstrated that cooperative coevolution (CC) of Self-adaptive Differential Evolution with Neighborhood Search (SaNSDE) with RAG (DECC-RAG) outperforms some state-of-the-art LSGO algorithms on the LSGO benchmarks proposed at IEEE CEC 2010 and CEC 2013. Nevertheless, the performance of the RAG algorithm can be improved by tuning the number of subcomponents. Moreover, we hypothesize that the number of subcomponents should vary during the run. In this study, we have performed an experimental analysis of parameter tuning in RAG. The results show that the algorithm performs better with subcomponents of larger size. In addition, some further improvement can be achieved by applying dynamic group sizing.
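To illustrate the decomposition idea behind the abstract, the following is a minimal sketch of cooperative coevolution with random dynamic grouping. It is not the authors' DECC-RAG implementation (which uses SaNSDE and adaptive learning of groups); it only shows how variables can be randomly re-grouped into subcomponents each cycle and each subcomponent optimized in the context of the current best full solution. The function names (`sphere`, `random_grouping`, `cc_random_grouping`) and all parameter values are illustrative assumptions.

```python
import random

def sphere(x):
    # Simple separable test function: f(x) = sum(x_i^2), minimum 0 at the origin.
    return sum(v * v for v in x)

def random_grouping(dim, num_groups, rng):
    # Randomly permute variable indices and split them into equal-size groups
    # (assumes dim is divisible by num_groups for brevity).
    idx = list(range(dim))
    rng.shuffle(idx)
    size = dim // num_groups
    return [idx[i * size:(i + 1) * size] for i in range(num_groups)]

def cc_random_grouping(fn, dim, num_groups=4, cycles=30, evals_per_group=20, seed=1):
    # Cooperative coevolution loop: candidates modify only one subcomponent's
    # variables, evaluated within the context of the current best full vector.
    rng = random.Random(seed)
    context = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    best = fn(context)
    for _ in range(cycles):
        # Re-group the variables at random every cycle (random dynamic grouping).
        for group in random_grouping(dim, num_groups, rng):
            for _ in range(evals_per_group):
                cand = context[:]
                for i in group:
                    # Placeholder Gaussian perturbation; DECC-RAG uses SaNSDE here.
                    cand[i] = context[i] + rng.gauss(0.0, 0.5)
                f = fn(cand)
                if f < best:
                    best, context = f, cand
    return best, context
```

A usage example: `cc_random_grouping(sphere, dim=8)` decomposes an 8-dimensional problem into four 2-variable subcomponents and drives the sphere value toward zero. Tuning `num_groups` (and hence subcomponent size) is exactly the parameter the study analyzes.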