Figure 12: Comparison with MOEA/D. K = 5, N = 100; top: 5 objectives, bottom: 6 objectives.
The performance of the proposed algorithm at the 2,000th generation is already better than that of MOEA/D at the 10,000th generation.
7 CONCLUSIONS
In this study, we focused on the ε-Sampling part of the Adaptive ε-Sampling and ε-Hood (AεSεH) algorithm and confirmed through experiments that AεSεH searches for solutions more effectively with the proposed improvement. This performance gain becomes more pronounced as the number of objectives increases. We attribute this to the growing importance of a search process that emphasizes solution uniformity as the objective space expands with the number of fitness functions. Since AεSεH is an algorithm developed for multi- and many-objective optimization, this improvement reinforces its many-objective characteristics. We also found that improving solution uniformity leads to solution distributions that are more amenable to accurate ε-Sampling. In addition, we showed that AεSεH scales well with the dimension of the search space and the complexity of the problem.
In future work, we would like to explore dynamic schedules for the amplification factor in the proposed scheme, reflecting, for example, the size of the target population and the number of target individuals. Such schedules would not only further reduce randomness but also lower the computational cost.
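As a purely illustrative sketch of one such schedule (the linear form, the variable names, and all constants below are our own assumptions rather than part of AεSεH or the proposed scheme), the amplification factor could, for instance, decay as the number of target individuals approaches the size of the target population:

def amplification_factor(num_targets, pop_size, base=2.0, floor=1.0):
    # Hypothetical linear schedule: the factor starts at `base` when no
    # individuals have been sampled yet and decays toward `floor` as the
    # number of target individuals approaches the target population size.
    # All parameter values are illustrative assumptions.
    if pop_size <= 0:
        return floor
    progress = max(0.0, min(1.0, num_targets / pop_size))
    return base - (base - floor) * progress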
REFERENCES
Aguirre, H., Oyama, A., and Tanaka, K. (2013). Adaptive ε-Sampling and ε-Hood for Evolutionary Many-Objective Optimization. In Evolutionary Multi-Criterion Optimization (EMO 2013), Lecture Notes in Computer Science, volume 7811, pages 322–336. Springer Berlin Heidelberg.
Aguirre, H., Yazawa, Y., Oyama, A., and Tanaka, K. (2014). Extending AεSεH from Many-objective to Multi-objective Optimization. In Simulated Evolution and Learning (SEAL 2014), Lecture Notes in Computer Science, volume 8886, pages 239–250. Springer, Cham.
Aguirre, H. E. and Tanaka, K. (2007). Working Principles, Behavior, and Performance of MOEAs on MNK-landscapes. European Journal of Operational Research, 181(3):1670–1690.
Audet, C., Bigeon, J., Cartier, D., Le Digabel, S., and Salomon, L. (2021). Performance Indicators in Multiobjective Optimization. European Journal of Operational Research, 292(2):397–422.
Coello, C., Van Veldhuizen, D., and Lamont, G. (2002). Evolutionary Algorithms for Solving Multi-Objective Problems. Kluwer Academic Publishers, New York.
Deb, K. (2001). Multi-Objective Optimization using Evolutionary Algorithms. John Wiley & Sons.
Fonseca, C. M., Paquete, L., and López-Ibáñez, M. (2006). An Improved Dimension-Sweep Algorithm for the Hypervolume Indicator. In 2006 IEEE International Conference on Evolutionary Computation, pages 1157–1163. IEEE.
Laumanns, M., Thiele, L., Deb, K., and Zitzler, E. (2002). Combining Convergence and Diversity in Evolutionary Multiobjective Optimization. Evolutionary Computation, 10(3):263–282.
von Lücken, C., Brizuela, C., and Barán, B. (2019). An Overview on Evolutionary Algorithms for Many-Objective Optimization Problems. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(1):e1267.
Zhang, Q. and Li, H. (2007). MOEA/D: A Multiobjective Evolutionary Algorithm based on Decomposition. IEEE Transactions on Evolutionary Computation, 11(6):712–731.
Zheng, K., Yang, R.-J., Xu, H., and Hu, J. (2017). A New Distribution Metric for Comparing Pareto Optimal Solutions. Structural and Multidisciplinary Optimization, 55:53–62.
Zitzler, E. (1999). Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications. PhD thesis, Diss. ETH No. 13398, Swiss Federal Institute of Technology Zurich.