Figure 5: ME2 scalability across multiple cores. The x-axis
of this graph represents the number of cores employed, from
3 to 24. The y-axis represents the speed-up in execution
time of the multi-threaded ME2 implementation over the
single-threaded one. The multi-threaded implementation of
ME2 speeds up execution nearly 7-fold when using 24 cores.
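The speed-up plotted in Figure 5 is the ratio of single-threaded to multi-threaded running time. A minimal sketch of that calculation, with hypothetical timings (the function names and the example numbers are ours, not from the paper):

```python
def speedup(t_single: float, t_multi: float) -> float:
    """Speed-up of a multi-threaded run over the single-threaded baseline."""
    return t_single / t_multi

def parallel_efficiency(t_single: float, t_multi: float, cores: int) -> float:
    """Fraction of ideal linear scaling achieved on the given core count."""
    return speedup(t_single, t_multi) / cores

# Hypothetical example: 120 s on one core vs. 17.3 s on 24 cores
# gives a speed-up of roughly 6.9, close to the ~7x reported.
```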
6 CONCLUSION
Map Explore & Exploit (ME2) is a modular and scalable meta-heuristic, suited to multi-modal, multi-dimensional optimization, with performance that matches or exceeds other well-known search and optimization algorithms. We presented the methodology of ME2 in detail, starting with Map and moving on to Explore and Exploit. We compared ME2 to GA, PSO, SA, and CMA-ES on several scalable benchmark functions, and showed ME2's competitive results for dimensions 10, 20, and 50. In addition, we demonstrated the computational scalability of ME2 by comparing a single-threaded version with a multi-threaded one running in a multi-core processing environment. The results confirm that ME2, owing to the distributed nature of its last two phases, is highly scalable: the multi-threaded version's running time decreases in a near-linear fashion as the number of processing cores increases. Finally, ME2's tri-modular architecture allows researchers to test other, potentially better, algorithms for each search phase, as long as a proper fitness function is defined for the conclusion of each phase. Exploring that potential is our next research objective.
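The tri-modular, phase-pluggable design described above can be illustrated with a short skeleton. This is a hypothetical sketch of the architecture, not the authors' implementation (which is available in the ME2 repository); all names and types here are ours:

```python
from typing import Callable, List

Candidate = List[float]
Fitness = Callable[[Candidate], float]
# A phase transforms a candidate pool, guided by a fitness function.
Phase = Callable[[List[Candidate], Fitness], List[Candidate]]

def me2_style_search(initial: List[Candidate],
                     map_phase: Phase,
                     explore_phase: Phase,
                     exploit_phase: Phase,
                     fitness: Fitness) -> Candidate:
    """Run the three phases in sequence and return the best candidate.

    Each phase is a pluggable function, so an alternative algorithm can
    be swapped in per phase, as the conclusion suggests, provided a
    suitable fitness function concludes each phase.
    """
    candidates = initial
    for phase in (map_phase, explore_phase, exploit_phase):
        candidates = phase(candidates, fitness)
    return min(candidates, key=fitness)
```

For instance, passing a trivial identity function for every phase reduces the search to picking the fittest member of the initial pool, which makes the composition easy to unit-test before real phase algorithms are plugged in.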
REFERENCES
Ackley, D. (1987). A Connectionist Machine for Genetic
Hillclimbing, volume SECS28 of The Kluwer Inter-
national Series in Engineering and Computer Science.
Kluwer Academic Publishers, Boston.
Bäck, T., Foussette, C., and Krause, P. (2013). Contem-
porary Evolution Strategies. Springer-Verlag Berlin
Heidelberg.
Hansen, N. (2007). The CMA evolution strategy.
http://cma.gforge.inria.fr/index.html.
Hansen, N. (2009). Benchmarking a BI-population CMA-
ES on the BBOB-2009 function testbed. In Workshop
Proceedings of the GECCO Genetic and Evolutionary
Computation Conference, pages 2389–2395. ACM.
Hansen, N. and Ostermeier, A. (2001). Completely deran-
domized self-adaptation in evolution strategies. Evo-
lutionary Computation, 9(2):159–195.
Holland, J. H. (1992). Adaptation in natural and artificial
systems. MIT Press, Cambridge, MA, USA.
Islam, M. (2019). ME2: Map Explore & Exploit, version
1.0. https://github.com/mohiul/ME2-Map-Explore-
Exploit/releases.
Kennedy, J. and Eberhart, R. (1995). Particle swarm opti-
mization. In Proceedings of ICNN’95 - International
Conference on Neural Networks, volume 4, pages
1942–1948 vol.4.
Kiranyaz, S., Ince, T., and Gabbouj, M. (2015). Multidi-
mensional Particle Swarm Optimization for Machine
Learning and Pattern Recognition. Springer Publish-
ing Company, Incorporated, 1st edition.
Luke, S. (1998). ECJ evolutionary computation library.
Available for free at http://cs.gmu.edu/~eclab/projects/ecj/.
Luke, S. (2017). ECJ then and now. In Proceedings
of the Genetic and Evolutionary Computation Con-
ference Companion, GECCO ’17, pages 1223–1230,
New York, NY, USA. ACM.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N.,
Teller, A. H., and Teller, E. (1953). Equation of state
calculations by fast computing machines. The Journal
of Chemical Physics, 21(6):1087–1092.
Ostermeier, A., Gawelczyk, A., and Hansen, N. (1994).
Step-size adaptation based on non-local use of selec-
tion information. In Davidor, Y., Schwefel, H.-P., and
Männer, R., editors, Parallel Problem Solving from
Nature — PPSN III, pages 189–198, Berlin, Heidel-
berg. Springer Berlin Heidelberg.
Rastrigin, L. A. (1974). Systems of extremal control.
Nauka.
Rosenbrock, H. H. (1960). An automatic method for finding
the greatest or least value of a function. The Computer
Journal, 3(3):175–184.
Schwefel, H.-P. (1981). Numerical Optimization of Com-
puter Models. John Wiley & Sons, Inc., New York,
NY, USA.
ECTA 2019 - 11th International Conference on Evolutionary Computation Theory and Applications