A Modified Preference-Based Hypervolume Indicator for Interactive
Evolutionary Multiobjective Optimization Methods
MaoMao Liang, Babooshka Shavazipour, Bhupinder Saini, Michael Emmerich and Kaisa Miettinen
University of Jyvaskyla, Faculty of Information Technology, P.O. Box 35 (Agora), 40014 University of Jyvaskyla, Finland
{maomao.m.liang, babooshka.b.shavazipour, bhupinder.s.saini, michael.t.m.emmerich, kaisa.miettinen}@jyu.fi
ORCIDs: 0009-0005-8567-8236, 0000-0002-6516-4423, 0000-0003-2455-3008, 0000-0002-7342-2090, 0000-0003-1013-4689
Keywords:
Multiple Objective Optimization, Interactive Methods, Performance Indicators, Region of Interest,
Evolutionary Algorithms, Preference-Based Hypervolume.
Abstract: Various interactive evolutionary multiobjective optimization methods have been proposed in the literature for problems with multiple, conflicting objective functions. In these methods, a decision maker, who is a domain expert, iteratively provides preference information to guide the solution process while gaining insight into the problem. To compare interactive evolutionary multiobjective optimization methods, a preference-based hypervolume indicator (PHI) has been proposed to quantify the performance of the methods. PHI was the first indicator designed based on desirable properties of indicators for interactive evolutionary multiobjective optimization methods. However, it has some shortcomings, such as excluding some potentially interesting solutions and being limited to a reference point as the only type of preference information. In this paper, a modified indicator called PHI+ is proposed to address these drawbacks. PHI+ modifies the region of interest of PHI. While PHI is directed at methods where a decision maker provides preference information in the form of a reference point, PHI+ is applicable to methods that utilize desirable ranges of objective function values as preference information. Therefore, PHI+ is the first indicator that can handle preference information provided as desirable ranges when evaluating interactive methods. Experimental results show that PHI+ can also better distinguish differences in the performance of interactive evolutionary multiobjective optimization methods.
1 INTRODUCTION
Many real-world problems involve multiple (con-
flicting) objective functions, and these problems
are called multiobjective optimization problems
(MOPs) (Sawaragi et al., 1985). For MOPs, it is typ-
ically impossible to find a solution where all objec-
tive functions can attain their optimal values. Instead,
there are many compromise solutions, called Pareto
optimal solutions (Sawaragi et al., 1985), representing
different trade-offs between the objective functions.
When multiple Pareto optimal solutions exist, a decision maker (DM), a person with domain expertise in the problem being solved, is usually involved to express preference information and determine the most preferred solution. If the preferences of a DM are
considered, multiobjective optimization methods can
be divided into three categories (Miettinen, 1999): a
priori, a posteriori, and interactive methods.
In a priori methods, a DM provides preference information before optimization. In contrast, a posteriori methods generate a representative set of Pareto optimal solutions to be considered. In interactive methods, a DM iteratively provides preference information to guide the solution process and focuses only on those Pareto optimal solutions that are of interest to the DM. Therefore, these methods can save computational resources and impose less cognitive load on the DM at a time.
There are many interactive multiobjective opti-
mization methods (Miettinen et al., 2008, 2016) that
can be used to solve MOPs. Among them, evolu-
tionary multiobjective optimization methods (Branke
et al., 2008) are population-based methods that can
be used to solve problems that have, e.g., non-
differentiable and discontinuous objective functions.
For compactness, in the following, we use the term
“methods” to refer to evolutionary multiobjective op-
timization methods since we focus on them.
Although many interactive methods have been
published, it is difficult to quantify their performance
due to the lack of appropriate quality indicators (Afsar
et al., 2021). Aghaei Pour et al. (2022) proposed the
desirable properties of indicators for interactive meth-
ods and the first indicator designed based on these
desirable properties was proposed in (Aghaei Pour
et al., 2024). It is called a preference-based hyper-
volume indicator (PHI) and it mainly calculates the
hypervolume (HV) (Zitzler and Thiele, 1998) of solu-
tions within a region of interest (ROI) defined using
the preference information provided by a DM. This is
assumed to be in the form of a reference point repre-
senting desirable objective function values.
We observed that certain solutions of potential in-
terest to a DM are excluded from the definition of a
ROI in Aghaei Pour et al. (2024), indicating that this
definition does not adequately highlight solutions that
reflect the preference information. This can influence
the comparison. Furthermore, PHI can only be used
to compare methods that involve a reference point.
In this paper, we modify the definition of the ROI in PHI and call the resulting modified PHI PHI+. This enables us to understand more accurately the performance of the methods being evaluated. Importantly, PHI+ is the first indicator that can be used to compare interactive methods with desirable ranges as the type of preference information. Unlike PHI+, PHI requires setting a point of poor objective function values to calculate HV values. Moreover, PHI values are limited to [0, 2], while PHI+ values range from 0 to infinity. Thus, PHI+ values allow us to distinguish differences in the performance of the methods more clearly.
2 BACKGROUND
A MOP (Sawaragi et al., 1985) can be expressed as:

$$\begin{aligned} \text{minimize} \quad & f(x) = (f_1(x), f_2(x), \ldots, f_k(x)) \\ \text{subject to} \quad & x \in S, \end{aligned} \tag{1}$$

where $x = (x_1, x_2, \ldots, x_n)$ is a decision variable vector in the feasible region $S$ of the decision space $\mathbb{R}^n$. There are $k$ objective functions $f_1, f_2, \ldots, f_k$, which map any feasible $x$ to an objective vector $f(x)$ in the so-called objective space $\mathbb{R}^k$. Additionally, while all objective functions are minimized in (1), objective functions to be maximized can be handled by multiplying them by $-1$.
A solution $x^1$ is said to dominate another solution $x^2$ if $f_i(x^1) \leq f_i(x^2)$ for all $i = 1, 2, \ldots, k$, and $f_i(x^1) < f_i(x^2)$ for at least one $i$. A feasible solution is Pareto optimal if it is not dominated by any other feasible solution in $S$. The image of the set of all Pareto optimal solutions is called the Pareto front (PF) in $\mathbb{R}^k$.
An ideal point $z^\star \in \mathbb{R}^k$ is a vector consisting of the lowest value of each objective function on a PF. On the other hand, a nadir point $z^{nad} \in \mathbb{R}^k$ consists of the highest values on a PF. The nadir point is usually approximated since the PF is not known (Miettinen, 1999).
Interactive methods ask a DM to iteratively pro-
vide preference information to guide the solution pro-
cess. When observing solution processes with inter-
active methods, one can often notice two phases (Mi-
ettinen et al., 2008). The goal of a so-called learning
phase is to allow a DM to explore different solutions
and improve their understanding of the problem until an
ROI can be identified. The goal of the next phase,
called a decision phase, is to fine-tune the search in
the ROI to find a solution that satisfies the DM.
A common way to express preferences is to use reference points (Lárraga and Miettinen, 2022; Wierzbicki, 1980). A reference point $r = (r_1, r_2, \ldots, r_k)$ is a vector in the objective space that represents desirable values for each objective function. Tanabe and Li (to appear) identify the three most commonly used definitions of ROIs in methods using reference points: ROIs based on a closest point, ROIs based on an achievement scalarizing function (ASF) (Wierzbicki, 1980), and ROIs based on the Pareto dominance relation.
Besides a reference point, other types of prefer-
ence information can be used (Luque et al., 2011).
An example is a desirable range of objective function
values, where a DM provides information defining a
range. The desirable ranges constitute an ROI (Haka-
nen et al., 2016; Manuel et al., 2022) that includes all
Pareto optimal solutions that lie within the desirable
ranges.
With an ROI based on the Pareto dominance re-
lation, Aghaei Pour et al. (2024) proposed PHI to
evaluate the performance of interactive methods. PHI
uses an HV, and besides incorporating the concept
of Pareto dominance, it also tries to capture the effi-
ciency of utilizing computational resources by penal-
izing solutions that fall outside an ROI, i.e., solutions
that do not reflect the reference point.
In Aghaei Pour et al. (2024), a so-called dystopian point $z^d \in \mathbb{R}^k$ is defined as $z^d_i = z^{nad}_i + \varepsilon$, $i = 1, 2, \ldots, k$, with a very small constant $\varepsilon > 0$. It is used in the calculation of the HV.
We denote the set of Pareto optimal solutions generated by an interactive method by $P$. If $P$ does not include any point that dominates the reference point $r$, the ROI includes all solutions of $P$ that are dominated by $r$, and the set composed of these solutions is called $P_1$. Alternatively, if $r$ is dominated by at least one solution in $P$, the ROI includes all solutions that dominate $r$, and the set composed of these solutions is called $P_2$. In Figure 1, black dots represent solutions outside the ROI, and black stars represent $P_1$ or $P_2$.
Figure 1: Visualization of the components of PHI for a bi-objective problem. (a) Case 1: when r is not dominated; (b) Case 2: when r is dominated.
To define PHI, according to the definitions of the HV and $z^d$, so-called positive parts and negative parts are defined as the green and pink areas in Figure 1, respectively. Formally, for a solution set $P$, the so-called negative contribution $v^-$ can be expressed as:

$$v^- = HV(P \cup \{r\}, z^d) - HV(P_2 \cup \{r\}, z^d) \tag{2}$$

and the so-called positive contribution $v^+$, consisting of $v_1$ and $v_2$, can be defined as:

$$v_1 = \begin{cases} HV(P, z^d) - v^-, & \text{if } P_2 = \emptyset \\ HV(r, z^d), & \text{otherwise,} \end{cases} \tag{3}$$

$$v_2 = \begin{cases} 0, & \text{if } P_2 = \emptyset \\ HV(P_2, z^d) - v_1, & \text{otherwise,} \end{cases} \tag{4}$$

$$v^+ = v_1 + v_2 = \begin{cases} HV(P, z^d) - v^-, & \text{if } P_2 = \emptyset \\ HV(P_2, z^d), & \text{otherwise.} \end{cases} \tag{5}$$

Based on the negative and positive contributions, PHI can be expressed as:

$$PHI(P, r, z^d) = \frac{v_1}{HV(r, z^d)} + \frac{v_2}{HV(P, z^d)} = \begin{cases} \dfrac{v_1}{HV(r, z^d)}, & \text{if } P_2 = \emptyset \\[2mm] 1 + \dfrac{v_2}{HV(P, z^d)}, & \text{otherwise.} \end{cases} \tag{6}$$
Compared with indicators designed for a priori
methods, PHI can provide more information about the
method being evaluated. If at least one Pareto optimal solution dominates r, then r is considered attainable. Conversely, if r is not dominated, it is considered unattainable. When the PHI value is in
[0,1], we know that r is unattainable, and when the
PHI value is greater than 1, we know that r is attain-
able. From (6), it can be seen that the higher the value
of PHI, the better the performance of the method that
generated P. The maximum value of PHI is 2.
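To make the computation of (2)-(6) concrete, the following is a minimal Python sketch of PHI for the bi-objective case. It is our own illustration, not the reference implementation: hv2d is a simple sweep-based hypervolume routine that only handles two objectives, and all function names are ours.

```python
import numpy as np

def hv2d(points, ref):
    """2-D hypervolume (minimization): area dominated by `points`, bounded by `ref`."""
    pts = np.asarray(points, dtype=float).reshape(-1, 2)
    pts = pts[np.all(pts <= ref, axis=1)]   # drop points outside the bound
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]        # sweep along the first objective
    area, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < best_f2:                    # non-dominated step of the staircase
            area += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return area

def phi(P, r, z_d):
    """PHI of solution set P for reference point r and dystopian point z_d, eqs. (2)-(6)."""
    P, r, z_d = np.asarray(P, float), np.asarray(r, float), np.asarray(z_d, float)
    P2 = P[np.all(P <= r, axis=1)]          # solutions dominating r
    # eq. (2): negative contribution of solutions outside the ROI
    v_neg = hv2d(np.vstack([P, [r]]), z_d) - hv2d(np.vstack([P2, [r]]), z_d)
    if len(P2) == 0:                        # r unattainable, eqs. (3) and (6)
        return (hv2d(P, z_d) - v_neg) / hv2d([r], z_d)
    v1 = hv2d([r], z_d)                     # eq. (3)
    v2 = hv2d(P2, z_d) - v1                 # eq. (4)
    return 1.0 + v2 / hv2d(P, z_d)          # eq. (6)

# Example: r = (3, 3) is unattainable for P, so PHI falls below 1.
print(phi([(4, 2), (2, 4)], (3, 3), (10, 10)))  # 48/49, about 0.98
```

In the example call, no solution dominates r = (3, 3), so P2 is empty and the PHI value lies below 1, as described above for an unattainable r.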
3 NEW INDICATOR PHI+

Before introducing the proposed indicator PHI+, we modify the ROI in the definition of PHI. In this way, we avoid some of the shortcomings of PHI.
3.1 Modified Region of Interest
As stated in Section 2, the ROI of PHI is determined
based only on the points that either dominate the ref-
erence point r or are dominated by it, but it falls
short in considering points that are incomparable to
the reference point. However, one can argue that in-
comparable solutions that are close to the reference
point may also be of interest to the DM and should
therefore have a positive contribution to the indicator
value. For example, solutions b and c in Figure 2 (a), and solution b in Figure 2 (b), are not included in the ROI (blue dot zone) as defined in PHI. However, a DM applying an interactive method is expected to learn and update their preferences. These close-by solutions may provide information of interest to the DM and should not be penalized with a negative contribution.
Moreover, if some solutions are far away from r but dominated by it, they are included in the ROI, although the indicator for sets containing such solutions should indicate that these sets do not adequately address the preferences of the DM and that these solutions are not close to r. Therefore, a definition of the ROI that takes these situations into account may provide more insight into the performance of interactive methods applied by a real DM.
Taking into account the other ROIs mentioned in Section 2, the ROI based on an ASF cannot easily be used for PHI because the parameters required to calculate the ASF value are not available when calculating PHI. To consider such an ROI in PHI, many additional parameters would need to be defined, which are difficult for the DM to provide, so we do not use it in this
paper. On the other hand, the ROI based on the closest point considers the solution closest to r, that is,
the solution that best satisfies the preferences. It is in-
cluded in the ROI. This means that there is at least one
solution in the ROI, even though all solutions may be far away from r and the DM may not be interested in any of them. Moreover, if the distances from the other solutions to the closest point are all greater than the radius
of this ROI, there is no guarantee that this ROI con-
tains the majority of solutions that satisfy the DM’s
preferences. Another weakness of this ROI is that it
is indifferent to Pareto dominance, and solutions dom-
inating the reference point are ignored.
Since the current ROI definitions in the literature
are insufficient for filtering solutions that accurately
depict the performance of methods, they cannot be
used in the context of PHI. Therefore, we introduce a modified ROI to overcome the mentioned limitations and propose a new indicator PHI+ as a variant of PHI utilizing the modified ROI.
Figure 2: The modified ROI. (a) Case 1: when r is not dominated; (b) Case 2: when r is dominated.
As illustrated in Figure 2, a DM may express a desirable range by providing a reference point and acceptable positive and negative deviations from it. The side length on the left (or bottom) side of r and the side length on the right (or top) side of r can be set separately; they are referred to as $sl_i$ and $sr_i$ for each objective function, $i = 1, 2, \ldots, k$. In this paper, we use the same deviation on each objective function. Then, up and lp are the upper right corner and the lower left corner of the desirable range, respectively.
Once we have desirable ranges, we can modify the
definition of a ROI as shown in Figure 2. When r
is not dominated by any solution in P, as shown in
Figure 2 (a), solutions in the green zone but not in
the pink grid zone are included in the modified ROI.
When r is dominated, as shown in Figure 2 (b), so-
lutions in the green zone are included in the modi-
fied ROI. The area of the green zone on the lower left
is infinite, which represents the zone where solutions
dominating r are located.
The modified ROI contains all solutions that sat-
isfy the DM’s preferences. It also includes solutions
that best satisfy the preferences.
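As a sketch of how this membership could be tested, the following Python function encodes our reading of Figure 2 (the function name and the box representation via lp and up are our own assumptions): a solution is in the modified ROI if it lies within the desirable range, or if it dominates r, which matters only when r is attainable.

```python
import numpy as np

def in_modified_roi(p, r, lp, up):
    """Membership in the modified ROI of Figure 2 (minimization), per our
    reading of Section 3.1: inside the desirable range [lp, up], or
    dominating r (the unbounded lower-left green zone in Figure 2 (b)).
    When r is not dominated by any solution, the second condition never
    triggers, matching case (a)."""
    p, r, lp, up = (np.asarray(a, dtype=float) for a in (p, r, lp, up))
    inside_range = bool(np.all(lp <= p) and np.all(p <= up))
    dominates_r = bool(np.all(p <= r))
    return inside_range or dominates_r
```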
3.2 Modified Preference-Based Hypervolume Indicator

Based on the modified ROI, we propose a modified PHI, called PHI+, to address some shortcomings of PHI. If we used the modified ROI in the PHI described in Section 2 and set a dystopian point $z^d$ as mentioned in Section 2, the PHI value could be greater than 1 even when r is not attainable. For example, in Figure 3, the HV value of the solutions in the modified ROI is clearly greater than the HV value of r, so the PHI value is greater than 1. In such cases, it is difficult to distinguish from PHI values whether r is attainable.
Figure 3: An example of using a modified ROI and setting a $z^d$ that is different from up in PHI.
Moreover, when r is attainable, as in (6), the denominator in PHI depends on the method under evaluation, and the PHI value remains the same when the numerator and the denominator decrease or increase at the same rate. Since the denominators in PHI differ between methods, we cannot directly infer the differences in the performance of the compared methods from PHI values. Furthermore, when the PHI value reaches its maximum value, that is, when r is dominated and r is the same as $z^d$, the PHI value remains at 2 even if the HV value of the solutions in the ROI changes.
Therefore, to make the indicator work well with the modified ROI and to enable it to reflect more accurately the performance of the methods being evaluated, we propose a new indicator PHI+ that is a variant of PHI. In PHI+, up is used as the dystopian point $z^d$ to calculate HV values, so unlike in PHI, we do not need to set a point $z^d$ separately. We refer to the set consisting of the solutions in the modified ROI as $P^+$, and to the HV value of the solutions in $P^+$ as $mv^+$, defined as:

$$mv^+ = HV(P^+, up). \tag{7}$$
Based on $mv^+$, we define PHI+ as:

$$PHI^+(P, r, z^d) = \begin{cases} \dfrac{mv^+}{HV(lp, up) - sl_1 \times sl_2 \times \cdots \times sl_k}, & \text{if } P_2 = \emptyset \\[2mm] \dfrac{mv^+}{HV(r, up)}, & \text{otherwise.} \end{cases} \tag{8}$$

To allow PHI+ to communicate similar information as the original PHI, that is, when r is attainable, the PHI+ value exceeds 1, and when r is not attainable, the PHI+ value falls below 1, we set different denominators depending on whether r is attainable or not.
Figure 4: An example of the calculation of PHI+. (a) Case 1: when r is not dominated by any solution; (b) Case 2: when r is dominated by some solution.
When r is unattainable, no solution dominates it; hence there are no solutions in the gray grid area in Figure 4 (a). To remove this area from the calculation and to make the indicator value more accurate, we subtract $sl_1 \times sl_2 \times \cdots \times sl_k$ from $HV(lp, up)$. We calculate the PHI+ value by dividing the HV value of the solutions in the modified ROI by the HV value of lp, excluding the region where no solutions exist.

In this case, the better the diversity and convergence of the solutions within the modified ROI, the higher the PHI+ value, and the maximum PHI+ value is 1. For example, in Figure 4 (a), using $up = (10, 6)^T$ as $z^d$ for the calculation of the HV, the HV value of the solutions in the modified ROI is 15, that is, $mv^+$ is 15; the HV value of lp (green area) minus the area of the gray grid region is 24, so the PHI+ value is 0.625.
When r is attainable, the PHI+ value is calculated by dividing the HV value of the solutions in the modified ROI by the HV value of r. In this case, the better the diversity and convergence of the solutions within the modified ROI, the higher the PHI+ value, and the value is greater than 1. For example, in Figure 4 (b), using $up = (10, 10)^T$ as $z^d$ for the calculation of the HV, the HV value of the solutions in the modified ROI is 50, that is, $mv^+$ is 50; the HV value of r is 16, so the PHI+ value is 3.125.
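Putting the pieces together, here is a sketch of PHI+ following equations (7) and (8). It reuses the hv2d helper from the PHI sketch in Section 2 and in_modified_roi from Section 3.1, and, like them, it is our bi-objective illustration rather than the released implementation.

```python
import numpy as np

def phi_plus(P, r, sl, sr):
    """PHI+ of solution set P, eqs. (7)-(8); needs hv2d and in_modified_roi."""
    P, r = np.asarray(P, dtype=float), np.asarray(r, dtype=float)
    sl, sr = np.asarray(sl, dtype=float), np.asarray(sr, dtype=float)
    lp, up = r - sl, r + sr                      # corners of the desirable range
    mask = [in_modified_roi(p, r, lp, up) for p in P]
    mv = hv2d(P[mask], up)                       # eq. (7): mv+ = HV(P+, up)
    if np.any(np.all(P <= r, axis=1)):           # P2 nonempty: r is attainable
        return mv / hv2d([r], up)                # eq. (8), attainable case
    return mv / (hv2d([lp], up) - np.prod(sl))   # eq. (8), unattainable case
```

With the values reported for Figure 4 (b) ($mv^+$ = 50 and HV(r, up) = 16), the attainable branch gives 3.125, and with those of Figure 4 (a) ($mv^+$ = 15 and a denominator of 24), the unattainable branch gives 0.625, matching the text.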
Unlike the original PHI, PHI+ evaluates solutions that are close to r and incomparable to it, and, whether r is attainable or not, its denominator is the same for all compared methods. If all solutions are far from r, resulting in no solutions within the modified ROI, PHI+ values will not inaccurately represent the performance of methods. Thus, PHI+ values allow us to understand the performance differences between interactive methods more accurately and clearly. Furthermore, PHI can only be used for interactive methods with a reference point, while PHI+ can also be used for methods with desirable ranges as preference information. For this, a DM can provide a reference point and deviations from it.
4 NUMERICAL EXAMPLE

In this section, we illustrate the functionality of PHI+ in assessing interactive methods and compare it against PHI. We apply the Interactive RVEA method (Hakanen et al., 2016), referred to as IRVEA, which uses RVEA (Cheng et al., 2016) as the underlying evolutionary algorithm. We chose this method because its source code is openly available in the DESDEO framework (Misitano et al., 2021). Due to the page limitation, the supplementary materials and the source code of PHI+ are available at https://optgroup.it.jyu.fi/material.php or https://doi.org/10.5281/zenodo.13587354.
4.1 Test Problem

We use the problem RE2-3-2, which has two objective functions and three decision variables, from the REal world problem (RE) suite (Tanabe and Ishibuchi, 2020) as a test problem, because it is built on real-world problems. We assume that six iterations are taken. As in Aghaei Pour et al. (2024), we consider two phases, so that the first four iterations belong to the learning phase and the last two iterations to the decision phase.

For a fair comparison, we set the same number of generations in each iteration. We set the number of generations to 1, since we use it only to visualize how PHI+ works and to avoid all solutions converging to one location. To make it more likely to obtain feasible solutions, we set the population size to 100. Since the ideal and nadir points of the problems are provided in Tanabe and Ishibuchi (2020), we normalize the objective function values before calculating the PHI+ value to improve its accuracy. The dystopian point for PHI is set to the corresponding nadir point.
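The paper does not spell out the normalization formula, so as an assumption we sketch the standard min-max scaling with the ideal and nadir points (function name ours):

```python
import numpy as np

def normalize(F, ideal, nadir):
    """Min-max scale objective vectors: the ideal maps to 0, the nadir to 1."""
    F, ideal, nadir = (np.asarray(a, dtype=float) for a in (F, ideal, nadir))
    return (F - ideal) / (nadir - ideal)
```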
4.2 Optimization Process of RE2-3-2

To visually compare the differences in the way PHI and PHI+ work, we show how they are calculated on RE2-3-2 in Figure 5. In the visualization, the ROI of PHI is represented by the area surrounded by blue lines: the solutions that dominate r are in the square surrounded by solid blue lines (Figure 5 (a)), and the solutions dominated by r are in the square surrounded by dashed blue lines. The modified ROI of PHI+ is indicated by the area enclosed by the blue line, excluding the gray grid area (Figure 5 (c)).

Naturally, the preferences depend on the solutions of the previous iteration. For compactness, we list them as follows (a small data sketch follows the list):
a) Iteration 1: The DM sets desirable ranges from which we derive r = (25, 12.5); the deviations on the objective functions are 25 and 12.5. The DM expresses this initial preference to understand what types of solutions may exist.

b) Iteration 2: The DM wants to learn more about the objective functions and sets desirable ranges from which we derive r = (160, 10); the deviations on the objective functions are 25 and 12.5. Based on the solutions found in Iteration 1, the DM sets the preference information to see more solutions in the ROI.

c) Iteration 3: The DM sets new desirable ranges that correspond to r = (20, 100); the deviations on the objective functions are 25 and 12.5, to study the first objective function further. The DM is interested in learning about different parts of the PF. Therefore, based on the solutions seen in Iteration 2, they provide new preference information to see more solutions in the new ROI.

d) Iteration 4: The DM sets desirable ranges leading to r = (10, 5); the deviations on the objective functions are 10 and 5, expecting to see lower values of the objective functions. As in Iteration 3, the DM is interested in seeing more solutions in a different part of the PF.

e) Iteration 5: The DM provides desirable ranges to see whether the solutions obtained meet their expectations, and we get r = (30, 5); the deviations on the objective functions are 10 and 5. In the previous iteration, the DM found the region they are most interested in. They set the new preference information to focus more on the same (and nearby) region and find more solutions of interest.

f) Iteration 6: The DM sets desirable ranges corresponding to r = (5.9, 0); the deviations on the objective functions are 10 and 5, expecting to see lower values of the objective functions. The DM found that their preferences were pessimistic and easily achieved in Iteration 5. They provide new preference information based on the knowledge gained to find more desirable solutions.
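To make the setup concrete, the schedule above can be written as data for the phi_plus sketch from Section 3.2 (the variable names and the commented loop are hypothetical; the per-iteration solution sets come from IRVEA and are not shown here):

```python
# Reference point and per-objective deviation for each iteration, as listed
# above; in this paper sl = sr = the deviation vector.
schedule = [
    ((25.0, 12.5),  (25.0, 12.5)),  # Iteration 1
    ((160.0, 10.0), (25.0, 12.5)),  # Iteration 2
    ((20.0, 100.0), (25.0, 12.5)),  # Iteration 3
    ((10.0, 5.0),   (10.0, 5.0)),   # Iteration 4
    ((30.0, 5.0),   (10.0, 5.0)),   # Iteration 5
    ((5.9, 0.0),    (10.0, 5.0)),   # Iteration 6
]
# for (r, dev), P in zip(schedule, per_iteration_solution_sets):
#     print(phi_plus(P, r, sl=dev, sr=dev))
```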
We have no room for studying all iterations, but to demonstrate the differences between PHI and PHI+ in the decision phase of IRVEA solving RE2-3-2, we observe their performance in Iterations 5 and 6. In the 5th iteration, when r is attainable, as shown in Figure 5 (a), the PHI value is 1 + A1/(A1 + A2), that is, one plus the volume of the green area divided by the sum of the volumes of the white and green areas. As shown in Figure 5 (c), the PHI+ value is (A3 + A4)/A4, that is, the sum of the volumes of the green and gray grid areas divided by the volume of the gray grid area.

In Figure 5 (a), since $z^d$ is far away from r, all solutions appear to be close to r. To clearly show the distances between them, we zoom in on a part of Figure 5 (a) and show the resulting image in Figure 5 (b). As shown in Figures 5 (b) and 5 (c), there is a solution that is close to r and incomparable to it. It may provide information of interest to the DM. This solution is in the modified ROI of PHI+ but not in the ROI of PHI, so it is evaluated by PHI+ but not by PHI. Therefore, PHI+ can represent the performance of the method more accurately.
We show the PHI and PHI+ values of the six iterations with IRVEA in Table 1. In the 5th iteration in Table 1, the PHI value is 1.083, and the PHI+ value is 5.779. When r is attainable, the PHI value is limited to [1, 2], while the PHI+ value ranges from 1 to infinity. As all the calculations were done after normalization of the objective function values, the difference in the attained PHI and PHI+ values is due to the denominators used in their calculations. PHI+ has a smaller denominator, emphasising performance differences due to small changes in the solution sets. Therefore, PHI+ can measure the differences in the performance between methods more clearly than PHI.
Figure 5: Pareto optimal solutions obtained by IRVEA on RE2-3-2. (a) Calculation of PHI in Iteration 5; (b) calculation of PHI in Iteration 5 (partially enlarged); (c) calculation of PHI+ in Iteration 5; (d) calculation of PHI in Iteration 6; (e) calculation of PHI in Iteration 6 (partially enlarged); (f) calculation of PHI+ in Iteration 6.
In the 6th iteration, when r is unattainable, as shown in Figure 5 (d), the PHI value is A1/(A1 + A2), that is, the volume of the green area divided by the sum of the volumes of the white and green areas. As shown in Figure 5 (f), the PHI+ value is A3/(A3 + A4), that is, the volume of the green area divided by the sum of the volumes of the white and green areas. To clearly show the distances between solutions, we zoom in on a part of Figure 5 (d) and show the resulting image in Figure 5 (e). In Figures 5 (e) and 5 (f), there is a solution that is far away from r and dominated by r, so the DM may not be interested in it. It is not in the modified ROI of PHI+, but it is in the ROI of PHI; thus, it is not evaluated by PHI+ but is evaluated by PHI. Therefore, PHI+ can represent the performance of methods more accurately.
Table 1: The PHI and PHI+ values of IRVEA on RE2-3-2.

Iteration    1             2             3
PHI          1.093990076   1.461817041   1.55402991
PHI+         3.104009333   12.48452761   13.512463

Iteration    4             5             6
PHI          1.02790275    1.082717159   0.995462552
PHI+         2.182101576   5.779339391   0.176329054
In the 6th iteration in Table 1, the PHI value is 0.995 and the PHI+ value is 0.176. The large difference arises because, when r is not dominated, the ROI of PHI is much larger than the ROI of PHI+. As this is the final iteration of the solution process, the DM is confident about the solutions they wish to obtain and thus has a good intuition about the bounds of their ROI. PHI+ captures this information more accurately than PHI, due to the more strictly bounded nature of its ROI. Therefore, PHI+ is more sensitive to changes in the solution set also when r is unattainable, leading to a more accurate measurement of performance differences between methods.
5 CONCLUSIONS

In this paper, we discussed the limitations of PHI, the only quality indicator developed for evaluating the performance of interactive evolutionary multiobjective optimization methods based on their desirable properties. Furthermore, we proposed a new indicator, PHI+, by modifying the ROI that is an element of the indicator, as well as the indicator formulation, to address the limitations of PHI. Compared to PHI, PHI+ can better evaluate the performance of interactive methods, and it can be used to compare methods with desirable ranges as preference information.

In addition to the way of defining the modified ROI described in Section 3.1, there is another way to determine the modified ROI, applicable only when the deviations on each objective function are the same. The preference information provided by a DM can then be a reference point and the upper bounds of acceptable objective function values. The modified ROI includes this range and an extension that contains solutions better than the solutions in this range.
Overall, PHI+ follows all the desirable properties possessed by PHI, and PHI+ is more sensitive than PHI to small differences in the solution sets found by different methods. Furthermore, we can directly understand the differences between the compared methods through PHI+ values. The experimental results show that PHI+ can more clearly reflect the changes in the solutions within the ROI.
Although PHI+ has more advantages than PHI, there is still room for further improvement. An aspect to explore further is the situation where multiple methods do not obtain a solution in the modified ROI. In this case, it is not easy to distinguish these methods, because the PHI+ values for all of them are zero. Therefore, our next step is to evaluate solutions outside the modified ROI so that PHI+ can work properly even when there is no solution in the modified ROI.

Additionally, to provide information in the PHI+ values about the attainability of the desirable ranges, we had to accept some discontinuity in the PHI+ values near 1. From our observations, resolving this discontinuity without harming other valuable properties is challenging. Therefore, this is also left as future work.
ACKNOWLEDGEMENTS
This research has received part of the funding from
the European Union – NextGenerationEU instrument
and was therefore partly funded by the Research
Council of Finland, grant number 352784, partly by
grant number 355346 of the same Council and is re-
lated to the thematic research area Decision Analyt-
ics utilizing Causal Models and Multiobjective Opti-
mization (jyu.fi/demo) of the University of Jyvaskyla.
REFERENCES
Afsar, B., Miettinen, K., and Ruiz, F. (2021). Assessing the
performance of interactive multiobjective optimiza-
tion methods: A survey. ACM Computing Surveys,
54(4):1–27.
Aghaei Pour, P., Bandaru, S., Afsar, B., Emmerich, M.,
and Miettinen, K. (2024). A performance indicator
for interactive evolutionary multiobjective optimiza-
tion methods. IEEE Transactions on Evolutionary
Computation, 28(3):778–787.
Aghaei Pour, P., Bandaru, S., Afsar, B., and Miettinen,
K. (2022). Desirable properties of performance indi-
cators for assessing interactive evolutionary multiob-
jective optimization methods. In Proceedings of the
Genetic and Evolutionary Computation Conference,
Companion, pages 1803–1811. ACM.
Branke, J., Deb, K., Miettinen, K., and Slowinski, R., edi-
tors (2008). Multiobjective Optimization: Interactive
and Evolutionary Approaches. Springer.
Cheng, R., Jin, Y., Olhofer, M., and Sendhoff, B. (2016).
A reference vector guided evolutionary algorithm for
many-objective optimization. IEEE Transactions on
Evolutionary Computation, 20(5):773–791.
Hakanen, J., Chugh, T., Sindhya, K., Jin, Y., and Miet-
tinen, K. (2016). Connections of reference vectors
and different types of preference information in in-
teractive multiobjective evolutionary algorithms. In
2016 IEEE Symposium Series on Computational In-
telligence (SSCI), pages 1–8. IEEE.
Lárraga, G. and Miettinen, K. (2022). Interactive MOEA/D with multiple types of preference information. In Proceedings of the Genetic and Evolutionary Computation Conference, Companion, pages 1826–1834. ACM.
Luque, M., Ruiz, F., and Miettinen, K. (2011). Global for-
mulation for interactive multiobjective optimization.
OR Spectrum, 33:27–48.
Manuel, M., Hien, B., Conrady, S., Kreddig, A., Doan, N.
A. V., and Stechele, W. (2022). Region of interest
based non-dominated sorting genetic algorithm-II: an
invite and conquer approach. In Proceedings of the
Genetic and Evolutionary Computation Conference,
pages 556–564. ACM.
Miettinen, K. (1999). Nonlinear Multiobjective Optimiza-
tion. Kluwer.
Miettinen, K., Hakanen, J., and Podkopaev, D. (2016). In-
teractive nonlinear multiobjective optimization meth-
ods. In Multiple Criteria Decision Analysis: State of
the Art Surveys, pages 927–976. Springer.
Miettinen, K., Ruiz, F., and Wierzbicki, A. P. (2008). Intro-
duction to multiobjective optimization: Interactive ap-
proaches. In Multiobjective Optimization: Interactive
and Evolutionary Approaches, pages 27–57. Springer.
Misitano, G., Saini, B. S., Afsar, B., Shavazipour, B., and
Miettinen, K. (2021). DESDEO: The modular and
open source framework for interactive multiobjective
optimization. IEEE Access, 9:148277–148295.
Sawaragi, Y., Nakayama, H., and Tanino, T. (1985). Theory
of Multiobjective Optimization. Elsevier.
Tanabe, R. and Ishibuchi, H. (2020). An easy-to-use real-
world multi-objective optimization problem suite. Ap-
plied Soft Computing, 89:106078.
Tanabe, R. and Li, K. (to appear). Quality indicators
for preference-based evolutionary multi-objective op-
timization using a reference point: A review and anal-
ysis. IEEE Transactions on Evolutionary Computa-
tion. doi: 10.1109/TEVC.2023.3319009.
Wierzbicki, A. P. (1980). The use of reference objectives
in multiobjective optimization. In Multiple Criteria
Decision Making Theory and Application, pages 468–
486. Springer.
Zitzler, E. and Thiele, L. (1998). Multiobjective optimization using evolutionary algorithms – A comparative case study. In Parallel Problem Solving from Nature – PPSN V, 5th International Conference, Proceedings, pages 292–301. Springer.