follows. Note, however, that in order to understand the linear programming problems, very little is lost if one just considers the $E^{a,y}_{C,x^\nu_i}$ as appropriate constants needed to interpolate inequality conditions from vertices over simplices and faces of simplices, without following the details. Computing the $E^{a,y}_{C,x^\nu_i}$ algorithmically is simple, given some upper bounds on the second-order derivatives of the components of the vector fields $f^a$.
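To see why such bounds suffice, recall the standard second-order Taylor estimate (a generic bound, not the precise definition of the $E^{a,y}_{C,x^\nu_i}$ used here): if every second-order partial derivative of a component $f^a_k$ is bounded in absolute value by $B$ on a convex set containing $x$ and $y$, then
$$\bigl| f^a_k(x) - f^a_k(y) - \nabla f^a_k(y) \cdot (x - y) \bigr| = \tfrac{1}{2}\bigl| (x-y)^{\mathsf{T}} H_k(\xi)\,(x-y) \bigr| \le \tfrac{B}{2}\, \|x-y\|_1^2,$$
where $H_k(\xi)$ is the Hessian of $f^a_k$ at some $\xi$ on the segment between $y$ and $x$. Constants of this type control the deviation of $f^a$ from its linear interpolation over a simplex, which is exactly the role the $E^{a,y}_{C,x^\nu_i}$ play below.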
We are now ready to state our linear programming problem to compute CPA Lyapunov functions. We first state it as in (Baier et al., 2010; Baier et al., 2012) and then outline a proof of why a feasible solution to it delivers a CPA Lyapunov function for the differential inclusion used in its construction. From the proof it becomes clear which conditions are unnecessary when we move from the differential inclusion (2) to the arbitrary switched system (1). In Section 3.2 we then discuss how the removal of constraints can be implemented algorithmically.
3.1 The Linear Programming Problem
Consider a triangulation $\mathcal{T}$ as in Section 2 and, adapted to the triangulation, the differential inclusion (2) and the corresponding arbitrary switched system (1). Assume that the equilibrium in question is at the origin. For every simplex $S_\nu \in \mathcal{T}$ let
$$C_\nu = \{x^\nu_0, x^\nu_1, \ldots, x^\nu_n\}$$
denote its vertices, i.e. $S_\nu = \operatorname{co} C_\nu$. Assume that for every $S_\nu = \operatorname{co} C_\nu \in \mathcal{T}$, every $\emptyset \neq C \subset C_\nu$, and every $a \in A_\nu$ we have an upper bound $B^a_{C,r,s}$ as in (11), and that we have fixed a vertex $y$ of $C$ for the definition of $E^{a,y}_{C,x_i}$. If $0 \in C$ we must choose $y = 0$ to avoid unsatisfiable constraints. Note that the sets $\operatorname{co} C$, where $\emptyset \neq C \subsetneq C_\nu$, are the faces of the simplex $S_\nu$.
The variables of the linear programming problem are $V_x$ for every $x$ that is a vertex of a simplex in $\mathcal{T}$, i.e. $x \in V_{\mathcal{T}}$. From a feasible solution, where the variables $V_x$ have been assigned values such that the linear constraints below are fulfilled, we then define a continuous function $V \colon D_{\mathcal{T}} \to \mathbb{R}$ through parameterization using these values: for an $x \in D_{\mathcal{T}}$ we can find a simplex $S_\nu = \operatorname{co}\{x^\nu_0, x^\nu_1, \ldots, x^\nu_n\}$ such that $x \in S_\nu$, and $x$ has a unique representation $x = \sum_{i=0}^n \lambda_i x^\nu_i$ as a convex combination of the vertices. For $x$ we define
$$V(x) := \sum_{i=0}^n \lambda_i V_{x^\nu_i}.$$
If two different simplices in $\mathcal{T}$ intersect, they do so in a common face; hence $V$ is well defined and continuous. By a slight abuse of notation we write $V(x^\nu_i)$ both for the variable $V_{x^\nu_i}$ of the linear programming problem and for the value of the function $V$ at $x^\nu_i$, since after we have assigned a numerical value to the former it is the value of the function $V$ at $x^\nu_i$.
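As a concrete illustration of this parameterization, the following sketch (our code; the function name cpa_value and the example data are illustrative, not from the paper) evaluates the CPA function on a single simplex by solving a linear system for the barycentric coordinates $\lambda_i$; locating the simplex containing $x$ within a full triangulation is omitted.

    import numpy as np

    def cpa_value(x, vertices, vertex_values):
        """Evaluate the CPA function at x on the simplex co{x_0, ..., x_n}.

        vertices:      (n+1) x n array whose rows are the vertices x_i^nu.
        vertex_values: length n+1 array of the assigned values V_{x_i^nu}.
        """
        vertices = np.asarray(vertices, dtype=float)
        m = vertices.shape[0]                       # m = n + 1 vertices
        # Barycentric coordinates solve: sum_i lam_i x_i = x, sum_i lam_i = 1.
        A = np.vstack([vertices.T, np.ones(m)])     # (n+1) x (n+1) system
        b = np.append(np.asarray(x, dtype=float), 1.0)
        lam = np.linalg.solve(A, b)
        assert np.all(lam >= -1e-12), "x is not in this simplex"
        # V(x) := sum_i lam_i * V_{x_i^nu}
        return lam @ np.asarray(vertex_values, dtype=float)

    # Example: the standard simplex in R^2 with values assigned at its vertices.
    S = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
    vals = [0.0, 1.0, 2.0]
    print(cpa_value([0.25, 0.25], S, vals))         # 0.5*0 + 0.25*1 + 0.25*2 = 0.75

Since two simplices sharing a face assign the same values on that face, evaluating via any containing simplex is consistent, mirroring the well-definedness argument above.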
There are two groups of constraints in the linear programming problem. The first group asserts that $V$ has a minimum at the origin:

Linear Constraints L1
If $0 \in V_{\mathcal{T}}$ one sets $V(0) = 0$. Then for all $x \in V_{\mathcal{T}}$:
$$V(x) \ge \|x\|_2.$$
Another possibility is to relax the condition of strong asymptotic stability of the origin to practical strong asymptotic stability. In this case one predefines an arbitrarily small neighbourhood $\mathcal{N}$ of the origin and does not demand that $V$ is decreasing along solution trajectories in this set. One must then make sure through constraints that
$$\max_{x \in \partial\mathcal{N}} V(x) < \min_{x \in \partial D_{\mathcal{T}}} V(x),$$
because sublevel sets of $V$ that are closed in $D^\circ_{\mathcal{T}}$ are lower bounds on the basin of attraction. This is not difficult to implement and is discussed in detail in e.g. (Hafstein, 2004; Hafstein, 2007; Baier et al., 2012; Hafstein et al., 2015). In short, the implications of such a Lyapunov function are that solutions enter $\mathcal{N}$ in finite time and either stay in $\mathcal{N}$ or stay close and enter it repeatedly.
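For a CPA function this comparison reduces to finitely many linear constraints: assuming $\partial\mathcal{N}$ and $\partial D_{\mathcal{T}}$ are unions of faces of simplices in $\mathcal{T}$ (an assumption on the triangulation made here for illustration), $V$ attains its extrema over these sets at vertices, so it suffices to demand $V(x) + \varepsilon \le V(y)$ for some fixed $\varepsilon > 0$ and all vertices $x \in V_{\mathcal{T}} \cap \partial\mathcal{N}$, $y \in V_{\mathcal{T}} \cap \partial D_{\mathcal{T}}$.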
The second group of linear constraints asserts that $V$ is decreasing along all solution trajectories. The simplest case is when $A_\nu = A$ for all $S_\nu \in \mathcal{T}$; then the appropriate constraints are:
Linear Constraints L2 (Simplest Case)
For every $S_\nu \in \mathcal{T}$ we demand, for every $a \in A_\nu$ and $i = 0, 1, \ldots, n$, that
$$\nabla V_\nu \cdot f^a(x^\nu_i) + \|\nabla V_\nu\|_1 \, E^{a,y}_{C_\nu,x^\nu_i} \le -\|x^\nu_i\|_2. \quad (13)$$
In the case of practical strong asymptotic stability one disregards the constraints (13) for $S_\nu \subset \mathcal{N}$. Note that the constraints (13) are linear in the variables $V(x^\nu_i)$, cf. e.g. (Giesl and Hafstein, 2014, Remarks 9 and 10); in particular, $\|\nabla V_\nu\|_1$ can be modelled through linear constraints using auxiliary variables.
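One standard way to express this (our rendering of the trick referenced above; it assumes $E^{a,y}_{C_\nu,x^\nu_i} \ge 0$) is to introduce for each simplex $S_\nu$ auxiliary variables $C_{\nu,1}, \ldots, C_{\nu,n}$ together with the linear constraints
$$-C_{\nu,k} \le (\nabla V_\nu)_k \le C_{\nu,k}, \qquad k = 1, \ldots, n,$$
and to replace $\|\nabla V_\nu\|_1$ in (13) by $\sum_{k=1}^n C_{\nu,k}$. Each component $(\nabla V_\nu)_k$ is a linear expression in the variables $V(x^\nu_i)$, so all of these constraints are linear, and since the $E$-term enters (13) with a nonnegative coefficient, any assignment feasible for the modified constraints also satisfies the original (13).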
Now, let us consider how one uses the constraints (13) to show that $D^+V(x, f^a(x)) \le -\|x\|_2$. By (Marinósson, 2002, Lemma 4.16) we have, for $a \in A_\nu$ and $x = \sum_{i=0}^n \lambda_i x^\nu_i \in S_\nu$ with $\sum_{i=0}^n \lambda_i = 1$, $\lambda_i \ge 0$, that
$$\Bigl\| f^a(x) - \sum_{i=0}^n \lambda_i f^a(x^\nu_i) \Bigr\|_\infty \le \sum_{i=0}^n \lambda_i \, E^{a,y}_{C_\nu,x^\nu_i}. \quad (14)$$
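Combining (13) and (14) then delivers the decrease condition; a sketch of the standard estimate (our rendering) for $x = \sum_{i=0}^n \lambda_i x^\nu_i \in S_\nu$ is
$$\nabla V_\nu \cdot f^a(x) \le \sum_{i=0}^n \lambda_i \Bigl( \nabla V_\nu \cdot f^a(x^\nu_i) + \|\nabla V_\nu\|_1 \, E^{a,y}_{C_\nu,x^\nu_i} \Bigr) \le -\sum_{i=0}^n \lambda_i \|x^\nu_i\|_2 \le -\|x\|_2,$$
where the first inequality uses Hölder's inequality $|\nabla V_\nu \cdot v| \le \|\nabla V_\nu\|_1 \|v\|_\infty$ together with (14), the second uses (13), and the last the triangle inequality. Since $V$ is affine on $S_\nu$ with gradient $\nabla V_\nu$, this gives $D^+V(x, f^a(x)) \le -\|x\|_2$ on $S_\nu$.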