(Henrion and Strugarek, 2008) and (Prékopa et al., 2011) as recently published papers. These results are simplified either by restricting consideration to one-row problems only, or by assuming independence of the matrix rows. In our paper we demonstrate the use of copula theory to deal with the dependence of the rows in (2). This was first done by (Henrion and Strugarek, 2011) for the set M(p) using a class of so-called logexp-concave copulas. We extend their results to another large, more common class of copulas and formulate an equivalent description of problem (2) that is convenient to solve by methods of second-order cone programming.
2 DEPENDENCE
2.1 Basic Facts about Copulas
The theory of copulas is well known in probability theory and mathematical statistics but, to our knowledge, has not so far been used in stochastic programming to describe the structure of the problem. In this section, we mention only the basic facts about copulas that are necessary for the following investigation. Most of the notions presented here (up to Proposition 2.7) are taken from the book (Nelsen, 2006).
Definition 2.1. A copula is the distribution function $C : [0;1]^K \to [0;1]$ of some $K$-dimensional random vector whose marginals are uniformly distributed on $[0;1]$.
Proposition 2.2 (Sklar's Theorem). For any $K$-dimensional distribution function $F : \mathbb{R}^K \to [0;1]$ with marginals $F_1, \dots, F_K$, there exists a copula $C$ such that
\[
\forall z \in \mathbb{R}^K: \quad F(z) = C(F_1(z_1), \dots, F_K(z_K)). \tag{5}
\]
If, moreover, the $F_k$ are continuous, then $C$ is uniquely given by
\[
C(u) = F(F_1^{-1}(u_1), \dots, F_K^{-1}(u_K)). \tag{6}
\]
Otherwise, $C$ is uniquely determined on $\operatorname{range} F_1 \times \cdots \times \operatorname{range} F_K$.
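As a simple worked illustration of (6) (not needed for the sequel), take the bivariate logistic distribution $F(z_1,z_2) = (1 + e^{-z_1} + e^{-z_2})^{-1}$, whose marginals are standard logistic, $F_k(z_k) = (1 + e^{-z_k})^{-1}$. Substituting the quantiles $F_k^{-1}(u_k)$ into $F$ yields the copula
\[
C(u_1,u_2) = \frac{u_1 u_2}{u_1 + u_2 - u_1 u_2},
\]
which captures the dependence of this distribution separately from its marginal behaviour.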
Through Sklar's Theorem, we have at hand an efficient general tool for handling an arbitrary dependence structure. First, if we know the marginal distributions $F_k$ together with the copula representing the dependence, we can unambiguously determine the joint distribution. On the other hand, the copula can be uniquely derived from the knowledge of the joint and all marginal distributions. Our first example is the independent (product) copula, which is nothing other than the independence formula for distribution functions:
\[
C_\Pi(u) = \prod_k u_k. \tag{7}
\]
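For instance, plugging $C_\Pi$ into (5) with standard exponential marginals $F_k(z_k) = 1 - e^{-z_k}$ simply recovers the factorized joint distribution function $F(z) = \prod_k (1 - e^{-z_k})$, as expected under independence.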
The second important example is the Gaussian copula, which is given by Sklar's Theorem applied to a joint normal distribution and its normally distributed marginals:
\[
C_\Sigma(u) = \Phi_\Sigma(\Phi^{-1}(u_1), \dots, \Phi^{-1}(u_K)) \tag{8}
\]
where $\Phi_\Sigma$ is the distribution function of the multivariate normal distribution with zero mean, unit variance and covariance matrix $\Sigma$, and $\Phi^{-1}(u_k)$ are standard
one-dimensional normal quantiles. For illustration
purposes, we provide a set of figures (Figures 1–5)
of some popular copulas. From the left-hand side, the
reader can always find the distribution function of the
copula (i. e., the copula itself), its density, and the den-
sity of the distribution given by the copula applied to
the standard normal marginals. Figure 1 represents
the independent copula; compare it to the Gaussian
copula in Figure 2. Note that the Gaussian copula
is the only copula that can represent the joint normal
distribution.
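As a small numerical illustration, the composition in (8) can be evaluated directly from standard routines for the normal quantile function and the multivariate normal distribution function. The following is a minimal sketch assuming the SciPy library; the function name gaussian_copula_cdf and the chosen correlation value are ours, for illustration only:

import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula_cdf(u, Sigma):
    # Phi^{-1}(u_k): standard normal quantiles of the copula arguments
    z = norm.ppf(u)
    # Phi_Sigma: multivariate normal CDF with zero mean and covariance Sigma
    return multivariate_normal(mean=np.zeros(len(u)), cov=Sigma).cdf(z)

# bivariate example with correlation 0.5
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
print(gaussian_copula_cdf([0.7, 0.4], Sigma))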
The following proposition provides the bounds between which every copula must lie.
Proposition 2.3 (The Fréchet-Hoeffding bounds). Every copula $C$ satisfies the inequalities
\[
W(u) \le C(u) \le C_M(u) \tag{9}
\]
where
\[
W(u) := \max\Big\{\sum_k u_k - K + 1,\; 0\Big\}, \qquad C_M(u) := \min_k \{u_k\}.
\]
The function $W$ represents completely negative dependence between the marginal distributions, but it is known not to be a copula if $K > 2$. $C_M$ represents completely positive dependence and is known under the name of the comonotone (maximum) copula. These functions, together with the independent copula, often arise as limiting cases of other classes of copulas.
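To see why $W$ fails to be a copula for $K > 2$, consider $K = 3$: the $W$-volume of the cube $[\tfrac{1}{2},1]^3$, i.e. the probability mass that $W$ would have to assign to it, equals
\[
W(1,1,1) - 3\,W\big(1,1,\tfrac{1}{2}\big) + 3\,W\big(1,\tfrac{1}{2},\tfrac{1}{2}\big) - W\big(\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2}\big) = 1 - \tfrac{3}{2} + 0 - 0 = -\tfrac{1}{2} < 0,
\]
so $W$ cannot be the distribution function of any random vector.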
The Gaussian copula has too complicated a structure (it does not even admit a closed analytic expression) to be treated directly in our optimization problems. Instead, we need a different, simpler class of copulas, which we find in the so-called Archimedean copulas.
Definition 2.4. A copula $C$ is called Archimedean if there exists a continuous, strictly decreasing function $\psi : [0;1] \to [0;+\infty]$, called the generator of $C$, such that $\psi(1) = 0$ and
\[
C(u) = \psi^{-1}\Big(\sum_{i=1}^{K} \psi(u_i)\Big). \tag{10}
\]
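For example, the generator $\psi(t) = -\ln t$, with $\psi^{-1}(s) = e^{-s}$, recovers the independent copula, since $\psi^{-1}\big(\sum_i \psi(u_i)\big) = \exp\big(\sum_i \ln u_i\big) = \prod_i u_i$. Other standard members of this class include the Clayton family, generated by $\psi_\theta(t) = (t^{-\theta} - 1)/\theta$, and the Gumbel family, generated by $\psi_\theta(t) = (-\ln t)^\theta$.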