believe that technology experts tackle the challenges in each research area separately.
However, each field has now reached a level of maturity, as shown by its dissemination in
academic and industrial work, and their integration would bring new research insights
and a novel angle on tackling real-world optimization problems with measurement un-
certainty. There has been some research in Constraint Programming (CP) to account for
data uncertainty and, similarly, some research in regression modeling to use optimization
techniques.
CP is a paradigm within Artificial Intelligence that has proved effective and successful
in modeling and solving difficult combinatorial search and optimization problems from
planning and resource management domains [19]. In essence, it models a given problem as a
Constraint Satisfaction Problem (CSP), that is: a set of variables, the unknowns
for which we seek a value (e.g. how much to order of a given product), the range of
values allowed for each variable (e.g. the product-order variable ranges between 0 and
200), and a set of constraints that define restrictions over the variables (e.g. the prod-
uct order must be greater than 50 units). Constraint solving techniques have primarily
been drawn from Artificial Intelligence (constraint propagation and search) and, more
recently, from Operations Research (graph algorithms, Linear Programming). A solution to a
constraint model is a complete, consistent assignment of a value to each decision vari-
able.
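To make the formalism concrete, the product-order example above can be written, for instance,
with Google's OR-Tools CP-SAT solver; this is only a minimal illustrative sketch (the choice
of solver and the variable name are ours, not part of the cited works), and any CP solver
exposing integer variables and arithmetic constraints would serve equally well.

    from ortools.sat.python import cp_model

    model = cp_model.CpModel()

    # Decision variable: how much to order of the product, with domain 0..200.
    order = model.NewIntVar(0, 200, "order")

    # Constraint: the product order must be greater than 50 units.
    model.Add(order > 50)

    # Propagation and search produce a complete, consistent assignment
    # of a value to every decision variable.
    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print("order =", solver.Value(order))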
In the past 15 years, the growing success of constraint programming technology
in tackling real-world combinatorial search problems has also raised the question of its
limitations in reasoning with and about uncertain data arising from incomplete or imprecise
measurements (e.g. energy trading, oil platform supply, scheduling). Since then, the generic
CSP formalism has been extended to account for various forms of uncertainty, e.g. numeri-
cal, mixed, quantified, fuzzy, uncertain, and CDF-interval CSPs [7]. The fuzzy and
mixed CSP frameworks [11] coined the concept of parameters, i.e. uncontrollable variables:
they can take a set of values, but their domain is not meant to be reduced to a single
value during problem solving (unlike decision variables). Constraints over parameters,
or uncontrollable variables, can be expressed, and thus some form of data dependency
can be modeled. However, there is a strong focus on discrete data, and the consistency tech-
niques used are not always effective in tackling large-scale or optimization problems. The
general QCSP formalism introduces universal quantifiers, where the domain of a uni-
versally quantified variable (UQV) is not meant to be pruned and its actual value is
unknown a priori. There has been work on QCSP with continuous domains, using one
or more UQVs and dedicated algorithms [2, 5, 18]. Discrete QCSP algorithms cannot
be used to reason about uncertain data since they apply a preprocessing step, enforced
by the solver QCSPsolve [12], which essentially determines whether constraints of
the form ∀X, ∀Y, C(X, Y) and ∃Z, ∀Y, C(Z, Y) are always true or always false for all
values of a UQV. This requirement is too strong: it does not reflect the fact that the
data will be refined later on and might then satisfy the constraint.
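As an illustration of why this requirement is too strong, the following Python sketch (ours;
QCSPsolve itself is not invoked, and the helper function c is hypothetical) enumerates all
parameter instances of the constraint used in Example 1 below and contrasts the all-or-nothing
preprocessing test with the fraction of instances that actually satisfy the constraint.

    from itertools import product

    # Domains of the universally quantified variables (as in Example 1 below).
    X_dom = [1, 2, 3]
    Y_dom = [0, 1, 2]

    def c(x, y):
        # The quantified constraint C(X, Y): X >= Y.
        return x >= y

    instances = list(product(X_dom, Y_dom))
    always_true = all(c(x, y) for x, y in instances)  # QCSP-style test: keep C only if this holds
    n_satisfied = sum(c(x, y) for x, y in instances)

    # A single failing instance (X=1, Y=2) makes the constraint "false" for the
    # preprocessing test, even though 8 of the 9 instances satisfy it.
    print(always_true, f"{n_satisfied}/{len(instances)} instances satisfied")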
Example 1. Consider the following constraint over UQVs:
∀X ∈ {1, 2, 3}, ∀Y ∈ {0, 1, 2}, X ≥ Y
Using QCSPsolve and its peers, this constraint would be deemed always false since the
possible parameter instance (X = 1, Y = 2) does not hold. However, all the other