Table 1: Estimation results after 24 evaluations of the convex function.

Iterations     6       12      18      24
EGO          36.40   23.42   13.65    6.74
3:1          36.40   23.42   13.65    5.36
1:1          36.40   23.42   11.90    5.48
1:3          36.40   20.79   12.87    5.01
MPI          36.72   20.84    6.44    4.13
Table 2: Estimation results after 48 evaluations of the convex function.

Iterations    12      24      36      48
EGO          23.42    6.74    3.33    1.33
3:1          23.42    6.74    3.33    1.39
1:1          23.42    6.74    3.49    2.28
1:3          23.42    5.48    2.53    1.32
MPI          20.84    4.13    3.34    2.38
3.3 Experimental Results
The performance of our algorithm was tested on the convex function f(x) = α ∑_{i=1}^{5} x_i^2, where α = 1/2. This five-dimensional function is continuous, bounded, and convex. Each dimension of the input space is bounded by [−10, 10].
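As a concrete sketch, the test function can be written as follows (the function name objective is our own choice, not from the paper):

```python
import numpy as np

def objective(x, alpha=0.5):
    """Convex test function alpha * sum_{i=1}^{5} x_i^2 on [-10, 10]^5."""
    x = np.asarray(x, dtype=float)
    return alpha * np.sum(x ** 2)

# The global minimum is 0 at the origin.
print(objective(np.zeros(5)))  # -> 0.0
```

Any minimizer that drives all five coordinates toward zero recovers the known optimum, which makes this a convenient benchmark for comparing acquisition strategies.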
The function was optimized 25 times, and the mean and variance of the current best solution were computed at four time points (after 6, 12, 18, and 24 iterations, or after 12, 24, 36, and 48 iterations). Given an initial random design of eight points, twenty or fifty additional points were iteratively selected and evaluated by the MPI, EGO, and mixed acquisition algorithms. The hyperparameters in these experiments were selected online by maximizing the log marginal likelihood.
We tested five models with different EI-to-PI ratios (1:0, 3:1, 1:1, 1:3, and 0:1). The first and last of these models are equivalent to EGO and the P-algorithm, respectively; the intermediate models are denoted by their ratios, 3:1, 1:1, and 1:3. In this way, we compared the performance of the mixed and standard acquisition functions.
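The paper does not spell out how an EI-to-PI ratio is realized; one plausible reading, assumed in the sketch below, is that the two acquisition functions alternate in the stated proportion (e.g. three EI steps followed by one PI step for the 3:1 model). The closed forms of EI and PI for minimization under a Gaussian process posterior are the standard ones (Jones et al., 1998; Kushner, 1964); the scheduling helper is our own illustration:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def normal_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def expected_improvement(mu, sigma, f_best):
    """EI at a candidate point, for minimization, given the GP
    posterior mean mu and standard deviation sigma there."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    return (f_best - mu) * normal_cdf(z) + sigma * normal_pdf(z)

def probability_of_improvement(mu, sigma, f_best):
    """PI (the MPI / P-algorithm criterion) for minimization."""
    if sigma <= 0.0:
        return float(mu < f_best)
    return normal_cdf((f_best - mu) / sigma)

def acquisition_for_iteration(t, n_ei, n_pi):
    """Pick EI or PI at iteration t under an n_ei:n_pi cycle
    (our assumed scheduling, not the paper's stated mechanism)."""
    return expected_improvement if (t % (n_ei + n_pi)) < n_ei else probability_of_improvement
```

At a point whose posterior mean equals the current best value, PI is exactly 0.5 while EI reduces to sigma times the standard normal density at zero; EI thus keeps rewarding high-variance regions, which matches the exploratory behavior attributed to EGO in the text.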
Table 1 shows the 24-evaluation case, where the 1:3 ratio yields a better solution than MPI after 12 iterations, but MPI obtains the best solution after 18 iterations.
In the 48-evaluation case, Table 2 shows that the P-algorithm performs well in the first half, while the 1:3 ratio yields the best final solution among all of the algorithms.
The results suggest that the greediness of the P-algorithm is beneficial for optimizing convex functions within a small number of iterations. When the current best solution is close to the local optimum, the P-algorithm yields the strongest improvement. Conversely, when the number of iterations is increased, the EGO method enables efficient searching. Therefore, the acquisition function should be selected based on the number of iterations.
After 36 iterations, the 1:3 ratio clearly outperforms the other algorithms. Changing the acquisition function to adapt the search strategy yields better optimization results than Bayesian optimization with a single acquisition function.
4 CONCLUSION
We demonstrated that to obtain the best global optimum by Gaussian process regression, the ratio of EI to PI should be adapted to the number of iterations. At some ratios, the combined approach yields superior results to single acquisition functions; at other ratios, MPI and EGO yield superior results. The time point for switching acquisition functions remains undetermined. We selected the ratio that improves the current best solution for a given objective function within a limited number of evaluations.
The GP-Hedge algorithm selects acquisition func-
tions for searching the next point by a bandit ap-
proach (Hoffman et al., 2011). Optimizing the pro-
posed method under limited evaluation conditions is
left for future research.
REFERENCES
Brochu, E., Cora, V. M., and De Freitas, N. (2010). A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599.
Hoffman, M. D., Brochu, E., and de Freitas, N. (2011).
Portfolio allocation for Bayesian optimization. In
UAI, pages 327–336. Citeseer.
Jones, D. R., Schonlau, M., and Welch, W. J. (1998).
Efficient global optimization of expensive black-box
functions. J. of Global Optimization, 13(4):455–492.
Kushner, H. J. (1964). A new method of locating the max-
imum point of an arbitrary multipeak curve in the
presence of noise. Journal of Fluids Engineering,
86(1):97–106.
Martin, J. D. and Simpson, T. W. (2003). A study on the
use of kriging models to approximate deterministic
computer models. In ASME 2003 International De-
sign Engineering Technical Conferences and Comput-
ers and Information in Engineering Conference, pages
567–576. American Society of Mechanical Engineers.
Global Optimization with Gaussian Regression Under the Finite Number of Evaluation