Table 2: Comparison of the algorithms EA, self BEA, and external BEA applied to 23 functions (see Table 3).
fun. err(EA) err(self BEA) err(ext. BEA)
f1 17.74 ±2.46 17.61 ±1.48 18.62 ±0.92
f2 37.12 ±4.01 40.1 ±13.67 35.12 ±6.57
f3 0.0 ±0.0 0.0 ±0.0 0.0 ±0.0
f4 12.18 ±5.1 28.61 ±8.17 21.31 ±4.36
f5 17e4 ±2.2e4 19e4 ±4.3e4 16e4 ±5.6e4
f6 19.0 ±1.9 18.2 ±2.93 19.0 ±2.19
f7 21.93 ±13.72 20.88 ±11.7 18.07 ±12.52
f8 2.6e3 ±1.6e3 2.9e3 ±0.3e3 2.7e3 ±1.5e3
f9 232.99 ±5.64 232.43 ±9.77 221.45 ±15.15
f10 4.34 ±0.16 13.67 ±7.36 4.12 ±0.14
f11 11.04 ±4.71 102.33 ±8.23 40.7 ±12.94
f12 5.31 ±1.0 6.25 ±1.11 4.94 ±1.21
f13 8.49 ±1.56 6.49 ±0.9 7.35 ±1.56
f14 3.15 ±3.0 3.9 ±5.11 1.97 ±2.42
f15 7e−4 ±3e−4 0.0 ±2e−4 0.0012 ±5e−4
f16 4e−4 ±4e−4 0.0 ±2e−4 4e−4 ±2e−4
f17 1e−4 ±1e−4 0.0 ±1e−4 1e−4 ±2e−4
f18 0.01 ±0.0071 0.0192 ±0.0098 0.0179 ±0.0097
f19 0.0018 ±0.0017 0.001 ±0.0017 0.0 ±0.0015
f20 0.21 ±0.03 0.25 ±0.04 0.17 ±0.06
f21 3.17 ±2.77 4.08 ±3.35 2.43 ±2.29
f22 1.01 ±0.29 1.99 ±2.87 0.75 ±0.22
f23 0.84 ±0.45 2.22 ±2.99 0.81 ±0.3
grids into consideration and determined the best parameter set as the one with the minimum rank. For all presented algorithms, i.e. EA, self BEA, and external BEA, these optimal parameters were used for a longer test run in order to compare them. Over 10 runs with 500 generations and a population size of 100, we analysed the mean remaining absolute error (cf. Table 2) and the mean progress over the generations (cf. Figure 6). As a result, the reference EA is better in 4 cases, the self BEA in 6 cases, and the external BEA in 12 cases. The high standard deviations of self BEA and external BEA in Figure 6 indicate a high variance in the population, but also show that only in the long term are the two new algorithms able to beat the reference.
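The comparison protocol described above can be summarised in a short sketch (a minimal sketch assuming NumPy; the routine run_algorithm and its signature are illustrative placeholders for the respective EA, self BEA, or external BEA implementation, not the paper's code):

    import numpy as np

    RUNS = 10          # independent runs per algorithm and function
    GENERATIONS = 500  # generations per run
    POP_SIZE = 100     # population size

    def compare(algorithms, functions, run_algorithm):
        """Collect mean and standard deviation of the remaining absolute
        error per function and algorithm, as reported in Table 2."""
        results = {}
        for f_name, f in functions.items():
            for a_name, config in algorithms.items():
                # run_algorithm is assumed to execute one optimisation run
                # and return the best absolute error reached in that run
                errors = [run_algorithm(f, config, GENERATIONS, POP_SIZE, seed=r)
                          for r in range(RUNS)]
                results[(f_name, a_name)] = (np.mean(errors), np.std(errors))
        return results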
6 CONCLUSIONS
We gave a short recap of the history of EP and presented a variety of operations of EAs. Based on these, we derived a reference EA for the comparison with two new bet-based EAs, called self-betting evolutionary algorithms (self BEAs) and external-betting evolutionary algorithms (external BEAs). These new bet-based algorithms are able to learn how to bet successfully, either on themselves (self BEAs) or on an external population (external BEAs). We presented experiments on 23 high-dimensional test functions and showed that the new algorithms are able to beat the reference on the majority of all analysed test functions.
Table 3: Numerical problems.

Function | Dim d | Ranges | Minimum value
$f_1(\mathbf{x}) = \sum_{i=1}^{n} x_i^2$ | 30 | $-5.12 \le x_i \le 5.12$ | $f_1(\mathbf{0}) = 0$
$f_2(\mathbf{x}) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} x_i$ | 30 | $-10 \le x_i \le 10$ | $f_2(\mathbf{0}) = 0$
$f_3(\mathbf{x}) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_i \right)^2$ | 30 | $-100 \le x_i \le 100$ | $f_3(\mathbf{0}) = 0$
$f_4(\mathbf{x}) = \max_i |x_i|,\; 0 \le i < n$ | 30 | $-100 \le x_i \le 100$ | $f_4(\mathbf{0}) = 0$
$f_5(\mathbf{x}) = \sum_{i=1}^{n-1} \left( 100 \cdot (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right)$ | 30 | $-30 \le x_i \le 30$ | $f_5(\mathbf{1}) = 0$
$f_6(\mathbf{x}) = \sum_{i=1}^{n} \lfloor x_i + \tfrac{1}{2} \rfloor^2$ | 30 | $-1.28 \le x_i \le 1.28$ | $f_6(\mathbf{p}) = 0$ for $-\tfrac{1}{2} \le p_i < \tfrac{1}{2}$
$f_7(\mathbf{x}) = \left( \sum_{i=1}^{n} (i+1) \cdot x_i^4 \right) + \mathrm{rand}[0,1[$ | 30 | $-1.28 \le x_i < 1.28$ | $f_7(\mathbf{0}) = 0$
$f_8(\mathbf{x}) = \sum_{i=1}^{n} -x_i \cdot \sin\left(\sqrt{|x_i|}\right)$ | 30 | $-500 \le x_i \le 500$ | $f_8(\mathbf{420.9687}) = -418.9829 \cdot d$
$f_9(\mathbf{x}) = \sum_{i=1}^{n} \left( x_i^2 - 10 \cdot \cos(2\pi x_i) + 10 \right)$ | 30 | $-5.12 \le x_i \le 5.12$ | $f_9(\mathbf{0}) = 0$
$f_{10}(\mathbf{x}) = -20 \cdot \exp\left(-0.2 \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\tfrac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e$ | 30 | $-32 \le x_i \le 32$ | $f_{10}(\mathbf{0}) = 0$
$f_{11}(\mathbf{x}) = \tfrac{1}{4000} \sum_{i=1}^{n} x_i^2 + \prod_{i=1}^{n} \cos\frac{x_i}{\sqrt{i+1}}$ | 30 | $-600 \le x_i \le 600$ | $f_{11}(\mathbf{0}) = 0$
$f_{12}(\mathbf{x}) = \tfrac{\pi}{n} \left( 10 \cdot (\sin(\pi y_1))^2 + \sum_{i=1}^{n-1} (y_i - 1)^2 \cdot \left( 1 + 10 \cdot (\sin(\pi y_{i+1}))^2 \right) + (y_n - 1)^2 \right) + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 30 | $-50 \le x_i \le 50$ | $f_{12}(\mathbf{-1}) = 0$
$f_{13}(\mathbf{x}) = 0.1 \left( (\sin(3\pi x_1))^2 + \sum_{i=1}^{n-1} (x_i - 1)^2 \cdot (\sin(3\pi x_{i+1}))^2 + (x_n - 1) \cdot \left( 1 + (\sin(2\pi x_n))^2 \right) \right) + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 30 | $-50 \le x_i \le 50$ | $f_{13}(\mathbf{1}) = 0$
$f_{14}(\mathbf{x}) = \left( \tfrac{1}{500} + \sum_{j=1}^{25} \left( j + \sum_{i=1}^{2} (x_i - a_{ij})^6 \right)^{-1} \right)^{-1}$ | 2 | $-65.54 \le x_i \le 65.54$ | $f_{14}(\mathbf{-32}) = 0.9980$
$f_{15}(\mathbf{x}) = \sum_{i=1}^{11} \left( a_i - \frac{x_0 (b_i^2 + b_i x_1)}{b_i^2 + b_i x_2 + x_3} \right)^2$ | 4 | $-5 \le x_i \le 5$ | $f_{15}(.19, .19, .12, .14) = 3.42\mathrm{e}{-4}$
$f_{16}(\mathbf{x}) = 4 x_0^2 - 2.1 x_0^4 + \tfrac{1}{3} x_0^6 + x_0 x_1 - 4 x_1^2 + 4 x_1^4$ | 2 | $-5 \le x_i \le 5$ | $f_{16}(0.09, -0.71) = -1.032$
$f_{17}(\mathbf{x}) = \left( x_1 - \tfrac{5.1}{4\pi^2} x_0^2 + \tfrac{5}{\pi} x_0 - 6 \right)^2 + 10 \cdot \left( 1 - \tfrac{1}{8\pi} \right) \cdot \cos(x_0) + 10$ | 2 | $-5 \le x_i \le 15$ | $f_{17}(-3.14, 12.26) = 0.398$
$f_{18}(\mathbf{x}) = \left( 1 + (x_0 + x_1 + 1)^2 \cdot (19 - 14 x_0 + 3 x_0^2 - 14 x_1 + 6 x_0 x_1 + 3 x_1^2) \right) \cdot \left( 30 + (2 x_0 - 3 x_1)^2 \cdot (18 - 32 x_0 + 12 x_0^2 + 48 x_1 - 36 x_0 x_1 + 27 x_1^2) \right)$ | 2 | $-2 \le x_i \le 2$ | $f_{18}(0, -1) = 3$
$f_{19}(\mathbf{x}) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$ | 3 | $0 \le x_i \le 1$ | $f_{19}(.114, .556, .852) = -3.86$
$f_{20}(\mathbf{x}) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$ | 6 | $0 \le x_i \le 1$ | $f_{20}(.201, .150, .477, .275, .311, .657) = -3.32$
$f_{21}(\mathbf{x}) = -\sum_{i=1}^{5} \left( (\mathbf{x} - \mathbf{a}_i)^T (\mathbf{x} - \mathbf{a}_i) + c_i \right)^{-1}$ | 4 | $0 \le x_i \le 10$ | $f_{21}(\mathbf{4}) = -10.15$
$f_{22}(\mathbf{x}) = -\sum_{i=1}^{7} \left( (\mathbf{x} - \mathbf{a}_i)^T (\mathbf{x} - \mathbf{a}_i) + c_i \right)^{-1}$ | 4 | $0 \le x_i \le 10$ | $f_{22}(\mathbf{4}) = -10.40$
$f_{23}(\mathbf{x}) = -\sum_{i=1}^{10} \left( (\mathbf{x} - \mathbf{a}_i)^T (\mathbf{x} - \mathbf{a}_i) + c_i \right)^{-1}$ | 4 | $0 \le x_i \le 10$ | $f_{23}(\mathbf{4}) = -10.54$

Functions for $f_{12}$, $f_{13}$:
$u(x, u, v, w) = \begin{cases} v (x - u)^w & \text{if } x > u \\ 0 & \text{if } -u \le x \le u \\ v (-x - u)^w & \text{if } x < -u \end{cases}$
$y_i = 1 + \tfrac{1}{4} (x_i + 1)$

Vectors a, b for $f_{15}$:
$a = (.1957, .1947, .1735, .1600, .0844, .0627, .0456, .0342, .0323, .0235, .0246)$
$b_i^{-1} = (0.25, 0.5, 1, 2, 4, 6, 8, 10, 12, 14, 16)$

2 × 25 matrix a for $f_{14}$:
$(a_{ij}) = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \dots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \dots & 32 & 32 & 32 \end{pmatrix}$

4 × 3 matrices a, p and vector c for $f_{19}$:
$(a_{ij}) = \begin{pmatrix} 3 & 10 & 30 \\ .1 & 10 & 35 \\ 3 & 10 & 30 \\ .1 & 10 & 35 \end{pmatrix}$
$(p_{ij}) = \begin{pmatrix} .3689 & .1170 & .2673 \\ .4699 & .4387 & .7470 \\ .1091 & .8732 & .5547 \\ .038150 & .5743 & .8828 \end{pmatrix}$
$c = (1, 1.2, 3, 3.2)$

4 × 6 matrices a, p and vector c for $f_{20}$:
$(a_{ij}) = \begin{pmatrix} 10 & 3 & 17 & 3.5 & 1.7 & 8 \\ .05 & 10 & 17 & .1 & 8 & 14 \\ 3 & 3.5 & 1.7 & 10 & 17 & 8 \\ 17 & 8 & .05 & 10 & .1 & 14 \end{pmatrix}$
$(p_{ij}) = \begin{pmatrix} .1312 & .1696 & .5569 & .0124 & .8283 & .5886 \\ .2329 & .4135 & .8307 & .3736 & .1004 & .9991 \\ .2348 & .1415 & .3522 & .2883 & .3047 & .6650 \\ .4047 & .8828 & .8732 & .5743 & .1091 & .0381 \end{pmatrix}$
$c = (1, 1.2, 3, 3.2)$

10 × 4 matrix a and vector c for $f_{21}$, $f_{22}$, $f_{23}$:
$(a_{ij}) = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 5 & 3 & 3 \\ 8 & 1 & 8 & 1 \\ 6 & 2 & 6 & 2 \\ 7 & 3.6 & 7 & 3.6 \end{pmatrix}$
$c = (.1, .2, .2, .4, .4, .6, .3, .7, .5, .5)$
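For illustration, two of the benchmark functions of Table 3 written out in code (a minimal NumPy sketch; only the sphere function f1 and the Rastrigin function f9 are shown, the other functions follow the same pattern):

    import numpy as np

    def f1(x):
        # Sphere function: sum of squared components, minimum f1(0) = 0
        x = np.asarray(x, dtype=float)
        return np.sum(x ** 2)

    def f9(x):
        # Rastrigin function: sum of x_i^2 - 10*cos(2*pi*x_i) + 10, minimum f9(0) = 0
        x = np.asarray(x, dtype=float)
        return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

    # Evaluate both at the origin of the 30-dimensional search space.
    origin = np.zeros(30)
    assert f1(origin) == 0.0 and f9(origin) == 0.0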
As discussed in (Ghoreishi et al., 2017), the stopping criterion plays an important role for the practical applicability of EAs. Since our experiments provide evidence of an advantage of bet-based EAs over canonical EAs after a fixed number of function evaluations, suitable stopping criteria for bet-based EAs should be investigated in further research. We assume that, for example, under the k-iteration stopping criterion, where the algorithm stops after k iterations without improvement, the bet-based EAs may use this period more efficiently: the dynamics of the bets push weak chromosomes towards the global optimum and let them dominate over chromosomes that are trapped in local optima.
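A minimal sketch of such a k-iteration stopping criterion (the class name and interface are illustrative assumptions, not taken from the paper or from (Ghoreishi et al., 2017)):

    class KIterationStop:
        """Stop when the best error has not improved for k consecutive generations."""

        def __init__(self, k, tolerance=0.0):
            self.k = k
            self.tolerance = tolerance
            self.best_error = float("inf")
            self.stalled = 0  # generations without improvement

        def update(self, current_best_error):
            """Report the best error of the current generation; returns True to stop."""
            if current_best_error < self.best_error - self.tolerance:
                self.best_error = current_best_error
                self.stalled = 0
            else:
                self.stalled += 1
            return self.stalled >= self.k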
In addition to the latter, for further research we want to extend the test procedure in order to learn how the designed algorithms can be refined. Also,