In each step, when a variable is chosen, its local preference is computed by setting all the missing preferences to the preference value 1. To choose the new value for the selected variable, we compute the preferences of the assignments obtained by choosing the other values for this variable. Since some preference values may be missing, in computing the preference of a new assignment we consider only the preferences that are known at the current point. We then choose the value associated with the best new assignment. If two values are associated with assignments with the same preference, we choose the one associated with the assignment with the smaller number of incomplete tuples. In this way, we aim at moving to a new assignment that is better than the current one and has the fewest missing preferences.
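This value-selection step can be sketched as follows. The sketch assumes the fuzzy semantics used in the paper (the preference of an assignment is the minimum preference over its constraints, with values in [0, 1]); the representation of constraints as functions returning None on incomplete tuples, and the function names, are our own illustrative choices.

```python
def evaluate(assignment, constraints):
    """Return (preference over the known tuples, number of incomplete tuples).

    Each constraint is modelled as a function from an assignment to a
    preference in [0, 1], or None when the tuple is incomplete.
    """
    known, incomplete = [], 0
    for pref_fn in constraints:
        p = pref_fn(assignment)
        if p is None:
            incomplete += 1          # missing preference: ignored here
        else:
            known.append(p)
    pref = min(known) if known else 1.0   # all missing -> defaults to 1
    return pref, incomplete

def best_value(var, domain, assignment, constraints):
    """Pick the value yielding the best new assignment; on equal
    preference, prefer the assignment with fewer incomplete tuples."""
    best = None
    for v in domain:
        cand = dict(assignment)
        cand[var] = v
        pref, inc = evaluate(cand, constraints)
        # higher preference wins; ties broken by fewer unknowns
        if best is None or (pref, -inc) > (best[1], -best[2]):
            best = (v, pref, inc)
    return best[0]
```

For instance, with one constraint known on both values of a variable x and a second constraint incomplete on x = 1, the value with the higher known preference is chosen even though it carries an incomplete tuple.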
Since the new assignment, say s′, could have incomplete tuples, we ask the user to reveal enough of this data to compute the actual preference of s′. We call ALL the elicitation strategy that elicits all the missing preferences associated with the tuples obtained by projecting s′ on the constraints, and we call WORST the elicitation strategy that asks the user to reveal only the worst preference among the missing ones, if it is less than the worst known preference. This is enough to compute the actual preference of s′, since the preference of an assignment coincides with the worst preference in its constraints.
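The two strategies can be contrasted in a minimal sketch, where the user's answers are modelled by the true (hidden) preference values of the incomplete tuples of s′; the function names are illustrative, not from the paper.

```python
def actual_pref_all(known, missing_true):
    """ALL: elicit every missing preference, then take the minimum,
    since an assignment's preference is the worst over its constraints."""
    return min(known + missing_true)

def actual_pref_worst(known, missing_true):
    """WORST: a single query reveals the worst missing preference,
    which is used only if it is below the worst known preference."""
    worst_known = min(known)
    worst_missing = min(missing_true)   # the one value the user reveals
    return worst_missing if worst_missing < worst_known else worst_known
```

Because the assignment's preference is a minimum, both strategies compute the same actual preference; WORST simply asks for far less information.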
As in many classical local search algorithms, to avoid stagnation in local minima, we employ tabu search and random moves. Our algorithm has two parameters: p, the probability of a random move, and t, the tabu tenure. When we have to choose a variable to reassign, the variable is either chosen randomly, with probability p, or, with probability (1-p), selected via the procedure described above. Also, if no improving move is possible, i.e., all new assignments in the neighborhood are worse than or equal to the current one, then the chosen variable is marked as tabu and not used for t steps.
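The move-selection policy with the two parameters p and t can be sketched as below; representing the tabu list as a map from each variable to the first step at which it becomes usable again is our modelling assumption.

```python
import random

def choose_variable(variables, tabu_until, step, p, score, rng=random):
    """With probability p pick a non-tabu variable at random; otherwise
    pick the non-tabu variable with the best score (the greedy
    procedure described in the text)."""
    free = [v for v in variables if tabu_until.get(v, 0) <= step]
    if rng.random() < p:
        return rng.choice(free)
    return max(free, key=score)

def mark_tabu(var, tabu_until, step, t):
    """Called when no improving move exists for var: forbid it for
    the next t steps (the tabu tenure)."""
    tabu_until[var] = step + t
```

With p = 0 the choice is purely greedy over the non-tabu variables; increasing p trades greediness for diversification.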
During search, the algorithm maintains the best solution found so far, which is returned when the maximum number of allowed steps is exceeded. We will show later that, even when the returned solution is not a necessarily optimal solution, its quality is not very far from that of the necessarily optimal solutions.
4 EXPERIMENTAL RESULTS
The test sets for IFCSPs are created using a generator that has the following parameters: n: number of variables; m: cardinality of the variable domains; d: density, i.e., the percentage of binary constraints present in the problem w.r.t. the total number of possible binary constraints that can be defined on n variables; t: tightness, i.e., the percentage of tuples with preference 0 in each constraint and in each domain w.r.t. the total number of tuples; i: incompleteness, i.e., the percentage of incomplete tuples (i.e., tuples with preference ?) in each constraint and in each domain. Our experiments measure the percentage of elicited preferences (over all the missing preferences), the solution quality (as the normalized distance from the quality of necessarily optimal solutions), and the execution time, as the generation parameters vary.
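A generator along these lines can be sketched as follows. For simplicity the sketch treats d, t, and i as per-item probabilities rather than exact percentages, and omits the unary (domain) preferences mentioned in the text; both simplifications, and the data layout, are our assumptions.

```python
import itertools
import random

def generate_ifcsp(n, m, d, t, i, rng):
    """Random binary IFCSP sketch: maps each kept constraint scope to a
    table from value pairs to a preference in [0, 1], where 0.0 tuples
    model tightness and None tuples model incompleteness."""
    problem = {}
    for scope in itertools.combinations(range(n), 2):
        if rng.random() >= d:            # keep a fraction d of constraints
            continue
        table = {}
        for tup in itertools.product(range(m), repeat=2):
            r = rng.random()
            if r < t:                    # tightness: preference-0 tuple
                table[tup] = 0.0
            elif r < t + i:              # incompleteness: unknown tuple
                table[tup] = None
            else:                        # known, random preference
                table[tup] = rng.random()
        problem[scope] = table
    return problem
```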
We first considered the quality of the returned solution. To do this, we computed the distance between the preference of the returned solution and that of the necessarily optimal solution returned by algorithm FBB (which stands for fuzzy branch and bound), one of the best algorithms in (Gelain et al., 2010a). In (Gelain et al., 2010a), this algorithm corresponds to the one called DPI.WORST.BRANCH. Such a distance is measured as a percentage of the whole range of preference values. For example, if the preference of the returned solution is 0.4 and that of the solution given by FBB is 0.5, the preference error reported is 10%. A higher error denotes a lower solution quality.
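The error metric just described amounts to the following one-liner (the function name is ours; the preference range is [0, 1] in the fuzzy setting):

```python
def preference_error(returned_pref, fbb_pref, pref_range=1.0):
    """Distance between the returned solution's preference and FBB's,
    as a percentage of the whole preference range."""
    return 100.0 * (fbb_pref - returned_pref) / pref_range
```

Applied to the example in the text (0.4 versus 0.5 over the range [0, 1]), it gives the reported 10%.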
Figures 1(a) and 1(b) show the preference error when the number of variables and the tightness vary (please notice that the y-axis ranges from 0% to 10%). We can see that the error is always very small: its maximum value is 3.5%, obtained on problems with 20 variables, and in most of the other cases it is below 1.5%. We can also notice that the solution quality is practically the same for both elicitation strategies.
If we look at the percentage of elicited preferences (Figures 1(c) and 1(d)), we can see that the WORST strategy always elicits fewer preferences than ALL, eliciting only 20% of the incomplete preferences in most of the cases. The FBB algorithm elicits about half as many preferences as WORST. Thus, with 10 variables, FBB is better than our local search approach, since it guarantees to find a necessarily optimal solution while eliciting a smaller number of preferences.
We also tested the WORST strategy varying the number of variables from 10 to 100. Figure 1(f) shows how the elicitation varies up to 100 variables. It is easy to notice that with more than 70 variables the percentage of elicited preferences decreases. This is because the probability of a complete assignment with preference 0 increases (since the density remains the same). Moreover, we can see that the local search algorithms scale better than the branch and bound approach. In Figure 1(e), FBB reaches a time limit of 10 min-
ICAART 2011 - 3rd International Conference on Agents and Artificial Intelligence