In lines 5-8, there are two nested loops, which lead to a complexity class of O(n ∗ (k + 1)). In line 10, there is an insertion into a sorted list with a maximum length of n − 1 if all other variables have been inserted before. The other efforts in lines 1-17 can be ignored for the complexity class, so the complexity of this part is in O(n ∗ (k + 1 + log(n))).
The getNextDomain method has a complexity of O(n) in the worst case if the next domain is D_i with D_i ≥ D_j ∀ j ∈ {1, ..., n}, j ≠ i, where x_i is the only variable which is not part of the sub-CSP P_l. In this case the while loop in lines 20-23 will be skipped because after this step all variables are part of the sub-CSP P_l, currentSize = size(P), and s is logically chosen smaller than size(P).
If the loop is not skipped, then the combined effort of getNextDomain and the loop accumulates to O(n) in the worst case. This leads to a complexity of O(n ∗ (k + 1 + log(n)) + n) ∈ O(n ∗ (k + 1 + log(n))) for the calculateValueSubCsp algorithm.
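Written as one bound, the efforts discussed above accumulate as follows (this merely restates the bounds from the text in a single line):

```latex
\[
\underbrace{O\bigl(n (k+1)\bigr)}_{\mbox{\scriptsize nested loops, lines 5--8}}
+ \underbrace{O\bigl(n \log(n)\bigr)}_{\mbox{\scriptsize sorted insertion, line 10}}
+ \underbrace{O(n)}_{\mbox{\scriptsize getNextDomain and while loop}}
\subseteq O\bigl(n (k + 1 + \log(n))\bigr)
\]
```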
The cspOverlapSplit algorithm is partitioned into two parts. The first part (lines 1-6) has two nested loops, where the outer loop is repeated |X′| ≤ |C| = m times. The inner loop is repeated |vars(c_i)| ≤ |X| = n times. This leads to the complexity class O(m ∗ n) for the first part.
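The pseudocode of cspOverlapSplit is not repeated here; the following minimal Python skeleton is only an assumed illustration of the loop structure behind the O(m ∗ n) bound, not the paper's implementation (the names scopes and overlap are invented for this sketch):

```python
# Hypothetical skeleton of the first part of cspOverlapSplit (lines 1-6):
# the outer loop runs over at most |C| = m constraint scopes, the inner loop
# over at most |X| = n variables per constraint, with O(1) work per pair.
def first_part(scopes):
    overlap = {}                                   # per-variable overlap information
    for scope in scopes:                           # at most m iterations
        for x in scope:                            # at most n iterations
            overlap[x] = overlap.get(x, 0) + 1     # constant work per pair
    return overlap                                 # total effort: O(m * n)
```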
The second part repeats the calculateValueSubCsp algorithm k times, so it is in O(k ∗ n ∗ (k + 1 + log(n))).
Finally, the overall complexity is in O(n ∗ m + n ∗ k ∗ (k + 1 + log(n))) ∈ O(n ∗ (m + k² + k ∗ log(n))). Depending on whether log(n) or k is bigger, this can be reduced to O(n ∗ (m + log²(n))) or O(n ∗ (m + k²)), respectively.
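Expanding the second summand makes the simplification and the two case distinctions explicit:

```latex
\[
n m + n k (k + 1 + \log(n))
  = n \bigl(m + k^{2} + k + k \log(n)\bigr)
  \in O\bigl(n (m + k^{2} + k \log(n))\bigr)
\]
% If k \le \log(n):  k^2 + k \log(n) \le 2 \log^2(n), hence O(n (m + \log^2(n))).
% If \log(n) \le k:  k^2 + k \log(n) \le 2 k^2, hence O(n (m + k^2)).
```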
For a static number of sub-CSPs k, the algorithm runs in quadratic time, which is acceptable for such constraints and which is in the same complexity class as the arc consistency propagation algorithm of the allDifferent constraint or the propagator of the cumulative constraint.
3.4 Integration of the Algorithm into
the Regularization Process
This section explains where we use the Meta CSOP in the regularization process.
Figure 1 shows the regularization process from a general CSP P to an optimized rCSP P″_reg. It starts with a general CSP P = (X, D, C) which has different kinds of constraints c ∈ C like count, allDifferent or sum; then P is transformed into a rCSP P_reg. This can be realized by direct transformations as explained in (Löffler et al., 2018) or with the detection of all solutions of a sub-CSP of P, which is then converted into a regular constraint.
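As an illustration only, not the transformation from (Löffler et al., 2018): once all solutions of a small sub-CSP have been enumerated, a DFA accepting exactly these solution tuples can be built, and the sub-CSP can be replaced by one regular constraint over its variables. All names in the sketch are assumptions.

```python
# Minimal sketch: build a (trie-shaped) DFA that accepts exactly the solution
# tuples of a sub-CSP; missing transitions reject. Illustrative only.
from itertools import count

def solutions_to_dfa(solutions):
    """solutions: iterable of equal-length value tuples (the sub-CSP solutions)."""
    new_state = count(1)
    start, transitions, accepting = 0, {}, set()
    for tup in solutions:                    # insert each solution as a word
        state = start
        for value in tup:
            key = (state, value)
            if key not in transitions:
                transitions[key] = next(new_state)
            state = transitions[key]
        accepting.add(state)                 # full tuple read -> accepting state
    return start, transitions, accepting     # DFA; can be minimized afterwards

# Example: two solutions of a hypothetical sub-CSP over two variables
start, delta, final = solutions_to_dfa([(1, 2), (1, 3)])
```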
After obtaining the rCSP P_reg, we optimize it (yielding P′_reg) by the intersection and minimization of the DFAs which are used in the regular constraints. By doing this, we reduce the number of constraints in P_reg; therefore, some potential fails and unwanted redundancy are removed.
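A minimal sketch of the intersection step, assuming a DFA representation as (start state, transition dictionary, set of accepting states); it is the standard product construction over reachable state pairs, not the paper's implementation, and minimization (e.g., with Hopcroft's algorithm) would follow as a separate step:

```python
# Product construction: the result accepts a word iff both input DFAs accept it.
from collections import deque

def intersect_dfas(start1, delta1, final1, start2, delta2, final2, alphabet):
    start = (start1, start2)
    delta, final = {}, set()
    queue, seen = deque([start]), {start}
    while queue:
        p, q = queue.popleft()
        if p in final1 and q in final2:
            final.add((p, q))                        # accept iff both DFAs accept
        for a in alphabet:
            if (p, a) in delta1 and (q, a) in delta2:
                nxt = (delta1[(p, a)], delta2[(q, a)])
                delta[((p, q), a)] = nxt
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return start, delta, final                       # minimize afterwards (e.g. Hopcroft)
```

Because missing transitions reject, the construction only explores state pairs that are reachable in both DFAs, which is what makes the subsequent minimization worthwhile.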
Unwanted redundancy means implicit redundancy between two (or more) constraints which slows down the solving process.
This kind of redundancy should not be confused with the positive redundancy which is often discussed in the literature. We expect that no such positive redundant constraints have been added to the model beforehand, so that all the negative redundancy can be removed first and the positive redundancy can be added afterwards. The interplay of positive and negative redundancy would also be an exciting research topic.
These optimizations can be repeated until only one regular constraint is left (P″_reg), but this may not be useful. The optimization steps are also very time-consuming: reducing the rCSP to one with only a single regular constraint may need more time than solving the original CSP P. In most cases, it is more time-efficient to perform only some optimization steps and then solve the rCSP. Finding the point up to which the optimization is useful is also one of our research areas and will be addressed in future work.
The presented Meta CSOP P_opt is part of the optimization steps of the algorithm. It detects the constraints which should be combined (intersected) next. Using P_opt with a maximum sub-CSP size s smaller than the size of the given CSP (size(P)) automatically leads to a stopping point in the optimization which is reached before all constraints are combined into one regular constraint.
Remark: the algorithm is especially, but not only, useful for regular constraints. It is also possible to substitute constraints with the table constraint or with a set of different types of constraints.
4 COMBINING OF REGULAR
CONSTRAINTS
This section explains how we combine two (or more)
regular constraints into a new one.
We assume that we have an ordered rCSP P_reg = (X, D, C) as explained in Section 2.1. Given are two regular constraints c_1 = regular(X_1, M_1) and c_2 = regular(X_2, M_2) with c_1, c_2 ∈ C and the variables X_1 ⊆ X and X_2 ⊆ X, where ∀k ∈ {1, 2} ∀i, j ∈ {1, ..., |X_k|}, i < j: x_i occurs earlier in c_k than x_j.
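The ordering condition says that each constraint lists its variables in the same relative order as the global variable sequence X of the ordered rCSP. A small check of this property; the list-based representation of scopes is an assumption made only for this sketch:

```python
# Illustrative check of the ordering condition: the scope of a regular
# constraint must respect the relative order of the global variable sequence X.
def scope_is_ordered(scope, global_order):
    """scope: variables of a regular constraint; global_order: X of the rCSP."""
    positions = [global_order.index(x) for x in scope]
    return positions == sorted(positions)   # x_i before x_j in the scope iff also in X

# Example with X = [x1, x2, x3, x4]
X = ["x1", "x2", "x3", "x4"]
assert scope_is_ordered(["x1", "x3"], X)        # respects the order of X
assert not scope_is_ordered(["x3", "x1"], X)    # violates the condition
```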
There are two cases. Case one: the two constraints cover the same set of variables (X_1 = X_2). Case two: