Local Lyapunov Functions for Nonlinear Stochastic Differential Equations by Linearization

Hjörtur Björnsson¹, Peter Giesl², Skuli Gudmundsson³ and Sigurdur Hafstein¹

¹ Science Institute and Faculty of Physical Sciences, University of Iceland, Dunhagi 5, 107 Reykjavík, Iceland
² Department of Mathematics, University of Sussex, Falmer, BN1 9QH, U.K.
³ Svensk Exportkredit, Klarabergsviadukten 61-63, 11164 Stockholm, Sweden
Keywords:
Stochastic Differential Equation, Lyapunov Function, Linearization, Asymptotic Stability in Probability.
Abstract:
We present a rigorous estimate of the domain on which a Lyapunov function for the linearization of a nonlinear stochastic differential equation is a Lyapunov function for the original system. Using this estimate, the demanding task of computing a lower bound on the γ-basin of attraction for a nonlinear stochastic system is greatly simplified, and the application of a recent numerical method for the same purpose is facilitated.
1 INTRODUCTION
When analysing the stability of an equilibrium of a nonlinear deterministic system
\[
\dot{x} = f(x), \qquad f\colon \mathbb{R}^d \to \mathbb{R}^d,
\]
one often resorts to linearization around the equilibrium. Assuming, without restriction of generality, that the equilibrium in question is at the origin, one analyzes the stability of the origin for the system $\dot{x} = Ax$, where $A := Df(0)$ is the Jacobian of $f$ at the origin. Now, if the matrix $A$ is Hurwitz, i.e. the real parts of the eigenvalues of $A$ are all strictly negative, then one can solve the Lyapunov equation $A^\top P + PA = -Q$, where $Q \in \mathbb{R}^{d\times d}$ is an arbitrary symmetric and positive definite matrix. The solution $P \in \mathbb{R}^{d\times d}$ is then symmetric and positive definite and $V(x) = x^\top P x$ is a Lyapunov function for the system, i.e. $V$ has a minimum at the equilibrium at the origin and the derivative of $V$ along solution trajectories of the linearized system fulfills
\[
\nabla V(x) \cdot Ax = -x^\top Q x
\]
and is thus negative on $\mathbb{R}^d \setminus \{0\}$. The function $V$ will also be a Lyapunov function for the original nonlinear system $\dot{x} = f(x)$ on a neighbourhood $\mathcal{N}$ of the origin where
\[
V'(x) = \nabla V(x) \cdot f(x) < 0 \quad\text{for } x \in \mathcal{N} \setminus \{0\}.
\]
Here $V'$ denotes the orbital derivative of the system. The size of the set $\mathcal{N}$ is of great importance because compact sublevel sets of $V$ that are within $\mathcal{N}$ are lower bounds on the equilibrium's basin of attraction, i.e. the set of points which converge to the equilibrium as time goes to infinity. Explicit bounds for the size of $\mathcal{N}$ are quite easily derived, cf. e.g. (Hafstein, 2004).
In this paper we will derive such an estimate, but for
the considerably more demanding case of stochastic
differential equations.
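As a concrete numerical illustration of the deterministic construction above, the following minimal sketch solves the Lyapunov equation with SciPy and checks the sign of the orbital derivative; the matrix A and the choice Q = I are purely illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical Hurwitz Jacobian A = Df(0) of some nonlinear system (illustrative only).
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
Q = np.eye(2)  # any symmetric positive definite matrix works

# Solve A^T P + P A = -Q for P.
P = solve_continuous_lyapunov(A.T, -Q)

# P is symmetric positive definite, so V(x) = x^T P x is a Lyapunov function
# for the linearization: its orbital derivative equals -x^T Q x < 0 for x != 0.
print(np.linalg.eigvalsh(P))        # all eigenvalues positive
x = np.array([1.0, -0.5])
print(x @ (A.T @ P + P @ A) @ x)    # equals -x^T Q x
```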
Notation: We denote by $\|x\|$ the Euclidean norm of a vector $x \in \mathbb{R}^d$ and for $A \in \mathbb{R}^{d\times d}$ by $\|A\| = \max_{\|x\|=1}\|Ax\|$ the matrix norm induced by the Euclidean vector norm. Vectors are assumed to be column vectors. We denote by $\kappa(A) := \|A\|\,\|A^{-1}\|$ the condition number with respect to the $\|\cdot\|$ norm of the nonsingular matrix $A \in \mathbb{R}^{d\times d}$. For a symmetric and positive definite $Q \in \mathbb{R}^{d\times d}$ we define the energetic norm $\|x\|_Q := \sqrt{x^\top Q x}$ and the corresponding induced matrix norm $\|A\|_Q := \max_{\|x\|_Q = 1}\|Ax\|_Q$.

Recall that a symmetric and positive definite $Q \in \mathbb{R}^{d\times d}$ can be factorized as $Q = ODO^\top$, where $O \in \mathbb{R}^{d\times d}$ is orthogonal, i.e. $O^\top O = OO^\top = I$, and $D = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_d) \in \mathbb{R}^{d\times d}$ is a diagonal matrix with $0 < \lambda_1 \le \lambda_2 \le \cdots \le \lambda_d$. For every $a \in \mathbb{R}$ we define the matrix $Q^a = O\operatorname{diag}(\lambda_1^a, \lambda_2^a, \ldots, \lambda_d^a)O^\top$. It is not difficult to see that for $a > 0$ we have $\|Q^a\| = \lambda_d^a$ and $\|Q^{-a}\| = \lambda_1^{-a}$. Further,
\[
\|Q^{-1/2}\|^{-1}\|x\| \le \|x\|_Q = \sqrt{x^\top Q x} = \|Q^{1/2}x\| \le \|Q^{1/2}\|\,\|x\|.
\]
We consider $d$-dimensional systems, and in all sums where the upper and lower bounds of the sum are omitted they are assumed to be $1$ and $d$ respectively, i.e. $\sum_i := \sum_{i=1}^d$, $\sum_{i,j} := \sum_{i,j=1}^d$, etc.
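The quantities $\|x\|_Q$, $Q^a$ and $\kappa(Q)$ used throughout the paper are straightforward to compute from the eigendecomposition of $Q$. A minimal numerical sketch, with an arbitrary illustrative $Q$, could look as follows.

```python
import numpy as np

def mat_power(Q, a):
    """Q^a = O diag(lambda_i^a) O^T for symmetric positive definite Q."""
    lam, O = np.linalg.eigh(Q)
    return O @ np.diag(lam ** a) @ O.T

def norm_Q(x, Q):
    """Energetic norm ||x||_Q = sqrt(x^T Q x)."""
    return np.sqrt(x @ Q @ x)

Q = np.array([[2.0, -1.0],
              [-1.0, 2.0]])   # illustrative symmetric positive definite matrix
x = np.array([1.0, -1.0])

kappa = np.linalg.norm(Q, 2) * np.linalg.norm(np.linalg.inv(Q), 2)  # condition number
Qh, Qmh = mat_power(Q, 0.5), mat_power(Q, -0.5)

# Check ||Q^{-1/2}||^{-1} ||x|| <= ||x||_Q <= ||Q^{1/2}|| ||x||.
print(np.linalg.norm(x) / np.linalg.norm(Qmh, 2),
      norm_Q(x, Q),
      np.linalg.norm(Qh, 2) * np.linalg.norm(x))
print(kappa)
```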
A function $\alpha\colon \mathbb{R}^+ \to \mathbb{R}^+$ is said to be of class $\mathcal{K}_\infty$
if it is continuous, monotonically increasing, $\alpha(0) = 0$, and $\lim_{x\to\infty}\alpha(x) = \infty$.
We write $\mathbb{P}$ and $\mathbb{E}$ for probability and expectation, respectively. The underlying probability spaces should always be clear from the context. The abbreviation a.s. stands for almost surely, i.e. with probability one, and $\overset{\text{a.s.}}{=}$ means equal a.s.
2 THE PROBLEM SETTING
We give a short discussion of the setup and the
problem at hand. For a more detailed discussion
of the setup see (Gudmundsson and Hafstein, 2018,
§2). The general d-dimensional stochastic differen-
tial equation (SDE) of It
ˆ
o type we consider is of the
form:
dX(t) = f(X(t))dt + g(X(t)) ·dW(t) (1)
or equivalently
dX
i
(t) = f
i
(X(t))dt +
U
u=1
g
u
(X(t)) ·dW
u
(t)
for $i = 1, 2, \ldots, d$. Thus $f = (f_1, f_2, \ldots, f_d)^\top$, $g = (g^1, g^2, \ldots, g^U)$, and $g^u = (g^u_1, g^u_2, \ldots, g^u_d)^\top$, where $f_i, g^u_i\colon \mathbb{R}^d \to \mathbb{R}$. We assume that the origin is an equilibrium of the system, i.e. $f(0) = 0$ and $g^u(0) = 0$ for $u = 1, 2, \ldots, U$, and we consider strong solutions to (1). For deterministic initial value solutions, i.e. $X(0) = x \in \mathbb{R}^d$ a.s., we write $X^x$ for the solution, i.e.
\[
X^x(t) = x + \int_0^t f(X(s))\,ds + \int_0^t g(X(s))\,dW(s),
\]
where the second integral is interpreted in the Itô sense. As shown in (Mao, 2008) it suffices to consider deterministic initial value solutions when studying the stability of an equilibrium.
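For intuition, strong solutions of (1) with a deterministic initial value can be approximated with the Euler–Maruyama scheme. The drift $f$ and diffusion $g$ in the sketch below are illustrative stand-ins, not the systems studied later in the paper.

```python
import numpy as np

def euler_maruyama(f, g_list, x0, T, steps, rng):
    """Simulate one path of dX = f(X)dt + sum_u g^u(X) dW_u with X(0) = x0."""
    d, U = len(x0), len(g_list)
    dt = T / steps
    X = np.array(x0, dtype=float)
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=U)
        X = X + f(X) * dt + sum(dW[u] * g_list[u](X) for u in range(U))
    return X

# Illustrative one-dimensional example with equilibrium at the origin:
# dX = -X dt + 0.5 X dW, so f(0) = 0 and g(0) = 0 as required.
f = lambda x: -x
g_list = [lambda x: 0.5 * x]
rng = np.random.default_rng(0)
print(euler_maruyama(f, g_list, x0=[1.0], T=10.0, steps=10_000, rng=rng))
```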
Numerous concepts are in use concerning the
stability of equilibria of SDEs. Here we will be
concerned with the so-called asymptotic stability in
probability of the zero solution (Khasminskii, 2012,
(5.15)), also referred to as stochastic asymptotic sta-
bility (Mao, 2008, Definition 4.2.1). For a more
detailed discussion of the stability of SDEs see the
books by Khasminskii (Khasminskii, 2012) or Mao
(Mao, 2008). We recall a few definitions:
Definition 2.1 (Stability in Probability (SiP)). The null solution $X(t) \overset{\text{a.s.}}{=} 0$ to the SDE (1) is said to be stable in probability (SiP) if for every $r > 0$ and $0 < \varepsilon < 1$ there exists a $\delta > 0$ such that
\[
\|x\| \le \delta \quad\text{implies}\quad \mathbb{P}\Big( \sup_{t \ge 0} \|X^x(t)\| \le r \Big) \ge 1 - \varepsilon.
\]
Definition 2.2 (Asymptotic Stability in Probability (ASiP)). The null solution $X(t) \overset{\text{a.s.}}{=} 0$ to the SDE (1) is said to be asymptotically stable in probability (ASiP) if it is SiP and in addition for every $0 < \varepsilon < 1$ there exists a $\delta > 0$ such that
\[
\|x\| \le \delta \quad\text{implies}\quad \mathbb{P}\Big( \lim_{t\to\infty} \|X^x(t)\| = 0 \Big) \ge 1 - \varepsilon.
\]
Our definitions of SiP and ASiP are equivalent to the more common
\[
\lim_{\|x\|\to 0} \mathbb{P}\Big( \sup_{t > 0} \|X^x(t)\| \le r \Big) = 1 \quad\text{for all } r > 0
\]
for SiP and additionally
\[
\lim_{\|x\|\to 0} \mathbb{P}\Big( \limsup_{t\to\infty} \|X^x(t)\| = 0 \Big) = 1
\]
for ASiP, which can be seen by fixing $r > 0$ and writing down the definition of a limit: for every $\varepsilon > 0$ there exists a $\delta > 0$.
The reason for our formulation is that we want to
look at a more practical concept related to such sta-
bility, namely a stochastic analog of the basin of at-
traction (BOA) in the stability theory for deterministic
systems, cf. (Gudmundsson and Hafstein, 2018). In-
stead of the limit $\|x\| \to 0$ we consider: Given some confidence $0 < \gamma \le 1$, how far from the origin can sample paths start and still approach the equilibrium as $t \to \infty$ with probability greater than or equal to $\gamma$? This
is the motivation for the next definition.
Definition 2.3 (γ-Basin Of Attraction (γ-BOA)). Consider the system (1) and let $0 < \gamma \le 1$. We refer to the set
\[
\Big\{ x \in \mathbb{R}^d : \mathbb{P}\Big( \lim_{t\to\infty} \|X^x(t)\| = 0 \Big) \ge \gamma \Big\} \tag{$\gamma$-BOA}
\]
as the γ-basin of attraction, or short γ-BOA, of the equilibrium at the origin.
Note that a 1-BOA corresponds to the usual BOA
for deterministic systems.
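In practice the probability in (γ-BOA) can only be estimated empirically, e.g. by Monte Carlo simulation: simulate many paths from the same initial value and count how many have numerically approached the origin by a large finite time. The sketch below is a heuristic illustration of that idea; it reuses the euler_maruyama helper from the previous snippet, and the horizon, tolerance and sample size are arbitrary choices.

```python
import numpy as np

def estimate_convergence_probability(f, g_list, x0, T=50.0, steps=50_000,
                                     samples=200, tol=1e-3, seed=0):
    """Crude Monte Carlo estimate of P(lim_{t->inf} ||X^x(t)|| = 0):
    the fraction of simulated paths whose endpoint ||X(T)|| is below tol."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(samples):
        XT = euler_maruyama(f, g_list, x0, T, steps, rng)
        if np.linalg.norm(XT) < tol:
            hits += 1
    return hits / samples

# x0 is (heuristically) judged to lie in the gamma-BOA if the estimate is >= gamma.
f = lambda x: -x
g_list = [lambda x: 0.5 * x]
print(estimate_convergence_probability(f, g_list, x0=[1.0]))
```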
For the SDE (1) the associated generator is given by
\[
LV(x) := \nabla V(x)\cdot f(x) + \frac{1}{2}\sum_{i,j}\big[g(x)g(x)^\top\big]_{ij}\,\frac{\partial^2 V}{\partial x_i\,\partial x_j}(x) \tag{2}
\]
for some appropriately differentiable $V\colon U \to \mathbb{R}$ with $U \subset \mathbb{R}^d$. Notice that this is just the drift term in the expression for the stochastic differential of the process $t \mapsto V(X(t))$. The generator for a stochastic system corresponds to the orbital derivative of a deterministic system.
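The generator (2) can be evaluated symbolically for concrete $f$, $g$ and $V$, which is convenient when checking the sign condition of Definition 2.4 below. A small sketch with sympy, for an illustrative two-dimensional example of my own choosing (the function name generator and all data are assumptions, not from the paper):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = sp.Matrix([x1, x2])

# Illustrative drift, single diffusion column (U = 1) and candidate V.
f = sp.Matrix([-x1 + x2**2, -x2])
g = sp.Matrix([0.3 * x1, 0.2 * x2])
V = (x.T * x)[0]                       # V(x) = ||x||^2

def generator(V, f, g_cols, x):
    """LV = grad(V).f + 1/2 * sum_{ij} [g g^T]_{ij} d^2 V / dx_i dx_j."""
    d = len(x)
    grad = sp.Matrix([sp.diff(V, x[i]) for i in range(d)])
    LV = (grad.T * f)[0]
    m = sum((gc * gc.T for gc in g_cols), sp.zeros(d, d))  # g(x) g(x)^T
    for i in range(d):
        for j in range(d):
            LV += sp.Rational(1, 2) * m[i, j] * sp.diff(V, x[i], x[j])
    return sp.simplify(LV)

print(generator(V, f, [g], x))
```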
Definition 2.4 (Local Lyapunov function). Consider the system (1). A function $V \in C(U) \cap C^2(U\setminus\{0\})$, where $0 \in U \subset \mathbb{R}^d$ is a domain, is called a (local) Lyapunov function for the system (1) if there are functions $\mu_1, \mu_2, \mu_3 \in \mathcal{K}_\infty$ such that $V$ fulfills the properties:
(i) $\mu_1(\|x\|) \le V(x) \le \mu_2(\|x\|)$ for all $x \in U$,
(ii) $LV(x) \le -\mu_3(\|x\|)$ for all $x \in U\setminus\{0\}$.
Remark 2.5. It is of vital importance that V is not
necessarily differentiable at the equilibrium, because
otherwise a large number of systems with an ASiP
null solution do not possess a Lyapunov function,
cf. (Khasminskii, 2012, Remark 5.5).
The following theorem provides the first center-
piece of Lyapunov stability theory for our application,
cf. (Khasminskii, 2012, Theorem 5.5 and Corollary
5.1):
Theorem 2.6. If there exists a local Lyapunov function as in Definition 2.4 for the system (1), then the null solution is ASiP. Further, let $V_{\max} > 0$ and assume that $V^{-1}([0, V_{\max}])$ is a compact subset of $U$. Then, for every $0 < \beta < 1$, the set $V^{-1}([0, \beta V_{\max}])$ is a subset of the $(1-\beta)$-BOA of the origin.
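For example, if the sublevel set $V^{-1}([0, V_{\max}])$ is compact in $U$ and one wants convergence with probability at least $\gamma = 0.95$, then choosing $\beta = 1 - \gamma = 0.05$ the theorem guarantees that the smaller sublevel set $V^{-1}([0, 0.05\,V_{\max}])$ is contained in the $0.95$-BOA of the origin.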
This concludes our discussion of the setup. In
the next section we discuss Lyapunov functions for
the linearization of (1) and prove the main contribu-
tion of this paper, a lower bound on the area where a
Lyapunov function for the linearization is also a Lya-
punov function for the nonlinear system.
3 MAIN RESULTS
We now consider the linearization of system (1). A
Lyapunov function for the linearized system can then
be constructed, e.g. with the method from (Hafstein et al., 2018), much more easily than for the nonlinear system (1). In addition to $f$ and $g$ satisfying the usual sufficient conditions of SDE solution theory, i.e. the local Lipschitz and linear-growth conditions, cf. e.g. (Mao, 2008, §2.3) or (Kallenberg, 2002, §21), we assume $f$ and $g$ are $C^2$ on a convex neighbourhood $U \subset \mathbb{R}^d$ of the origin. The second-order Taylor expansion of the components $f_i$ of $f$ at $x \in U$ reads
\[
f_i(x) = \sum_j x_j F_{ij} + \frac{1}{2}\sum_{j,k} x_j x_k R^i_{jk}(x) = (Fx)_i + \frac{1}{2}\, x^\top R^i(x)\, x,
\]
and for the components $g^u_i$ of $g^u$,
\[
g^u_i(x) = \sum_j x_j G^u_{ij} + \frac{1}{2}\sum_{j,k} x_j x_k R^{ui}_{jk}(x) = (G^u x)_i + \frac{1}{2}\, x^\top R^{ui}(x)\, x.
\]
Here $F = (F_{ij})_{i,j} \in \mathbb{R}^{d\times d}$ with $F_{ij} = \partial_j f_i(0)$ and $G^u = (G^u_{ij})_{i,j} \in \mathbb{R}^{d\times d}$ with $G^u_{ij} = \partial_j g^u_i(0)$, and the matrices $R^i(x)$ and $R^{ui}(x)$ are the Taylor remainders
\[
R^i(x) = \big(R^i_{jk}(x)\big)_{j,k} \in \mathbb{R}^{d\times d} \quad\text{and}\quad R^{ui}(x) = \big(R^{ui}_{jk}(x)\big)_{j,k} \in \mathbb{R}^{d\times d}.
\]
By abuse of notation we define the elements of upper bound matrices $R^i = \big(R^i_{jk}\big)_{j,k} \in \mathbb{R}^{d\times d}$ and $R^{ui} = \big(R^{ui}_{jk}\big)_{j,k} \in \mathbb{R}^{d\times d}$ as follows:
\[
\big|\partial^2_{jk} f_i(x)\big| \le R^i_{jk}, \text{ and hence } \big|R^i_{jk}(x)\big| \le R^i_{jk}, \tag{3}
\]
\[
\big|\partial^2_{jk} g^u_i(x)\big| \le R^{ui}_{jk}, \text{ and hence } \big|R^{ui}_{jk}(x)\big| \le R^{ui}_{jk}, \tag{4}
\]
for all $x \in \mathcal{N}$, where $\mathcal{N}$ is a neighbourhood of the origin to be defined later. Finally we fix the constants $R_i$ and $R_{ui}$ as
\[
R_i := \|R^i\| \quad\text{and}\quad R_{ui} := \|R^{ui}\|. \tag{5}
\]
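The constants in (3)-(5) require uniform bounds on the second derivatives of $f$ and $g$ over the (yet to be fixed) neighbourhood $\mathcal{N}$. When closed-form Hessians are inconvenient, one may approximate the bounds by sampling, as in the following non-rigorous sketch (finite-difference Hessians at random points of a candidate set $\{\|x\|_Q \le \rho^*\}$); the helper names are mine, and for a guaranteed bound one would instead use interval arithmetic or analytic estimates.

```python
import numpy as np

def hessian_fd(h, x, eps=1e-5):
    """Central finite-difference Hessian of a scalar function h at x."""
    d = len(x)
    H = np.zeros((d, d))
    for j in range(d):
        for k in range(d):
            e_j, e_k = np.eye(d)[j], np.eye(d)[k]
            H[j, k] = (h(x + eps*e_j + eps*e_k) - h(x + eps*e_j - eps*e_k)
                       - h(x - eps*e_j + eps*e_k) + h(x - eps*e_j - eps*e_k)) / (4 * eps**2)
    return H

def sampled_remainder_bound(h, Q, rho_star, samples=2000, seed=0):
    """Estimate elementwise bounds |d^2 h / dx_j dx_k| over {x : ||x||_Q <= rho*} by sampling."""
    rng = np.random.default_rng(seed)
    d = Q.shape[0]
    L = np.linalg.cholesky(Q)            # Q = L L^T, so ||x||_Q = ||L^T x||
    Lt_inv = np.linalg.inv(L).T          # maps the Euclidean unit ball onto {||x||_Q <= 1}
    bound = np.zeros((d, d))
    for _ in range(samples):
        y = rng.normal(size=d)
        y *= rng.uniform() ** (1 / d) / np.linalg.norm(y)   # uniform point in the unit ball
        x = rho_star * (Lt_inv @ y)                         # now ||x||_Q <= rho*
        bound = np.maximum(bound, np.abs(hessian_fd(h, x)))
    return bound   # bound matrix for one component h = f_i or g^u_i; R_i is its spectral norm
```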
The action of the generator (2) of the system (1) on some $V \in C(U) \cap C^2(U\setminus\{0\})$ can be written as
\[
LV(x) = \frac{1}{2}\sum_{i,j} m_{ij}(x)\, \partial^2_{ij} V(x) + \sum_i f_i(x)\, \partial_i V(x) = L_0 V(x) + E(x),
\]
where $L_0 V(x)$ is the generator of the linearized system defined below and $E(x)$ the rest (containing all the Taylor remainders). We will now work out the details. First notice that:
\[
\begin{aligned}
m_{ij}(x) &= \sum_{u=1}^U g^u_i(x)\, g^u_j(x)\\
&= \sum_{k,l} x_k x_l \sum_{u=1}^U G^u_{ik} G^u_{jl}
+ \frac{1}{2}\sum_{k,l,m} x_k x_l x_m \sum_{u=1}^U \Big( G^u_{ik} R^{uj}_{lm}(x) + G^u_{jk} R^{ui}_{lm}(x) \Big)\\
&\quad + \frac{1}{4}\sum_{k,l,m,n} x_k x_l x_m x_n \sum_{u=1}^U R^{ui}_{kl}(x)\, R^{uj}_{mn}(x).
\end{aligned}
\]
We define $L_0$ as the generator associated to the linearization of the system (1), i.e. the system
\[
dX(t) = F\, X(t)\,dt + \sum_{u=1}^U G^u X(t)\,dW_u(t) \tag{6}
\]
or equivalently
\[
dX_i(t) = \sum_j F_{ij} X_j(t)\,dt + \sum_{u=1}^U \sum_j G^u_{ij} X_j(t)\,dW_u(t)
\]
for $i = 1, 2, \ldots, d$, which means that
\[
L_0 V(x) = \sum_{i,j} F_{ij}\, x_j\, \partial_i V(x) + \frac{1}{2}\sum_{i,j}\bigg( \sum_{k,l} x_k x_l \sum_{u=1}^U G^u_{ik} G^u_{jl} \bigg) \partial^2_{ij} V(x). \tag{7}
\]
We gather together the nonlinear parts of the full SDE generator into the expression for $E(x)$:
\[
E(x) = \underbrace{\sum_{s} E_s(x)\, \partial_s V(x)}_{E_F(x)} + \underbrace{\frac{1}{2}\sum_{r,s} E_{rs}(x)\, \partial^2_{rs} V(x)}_{E_G(x)},
\]
where
\[
E_s(x) = \frac{1}{2}\sum_{j,k} x_j x_k R^s_{jk}(x) \quad\text{and}
\]
\[
\begin{aligned}
E_{rs}(x) &= \frac{1}{2}\sum_{k,l,m} x_k x_l x_m \sum_{u=1}^U \big( G^u_{rk} R^{us}_{lm}(x) + G^u_{sk} R^{ur}_{lm}(x) \big)\\
&\quad + \frac{1}{4}\sum_{k,l,m,n} x_k x_l x_m x_n \sum_{u=1}^U R^{ur}_{kl}(x)\, R^{us}_{mn}(x).
\end{aligned}
\]
The plan for the rest of this section is as follows: with $LV(x)$ broken up into a linear part $L_0 V(x)$ and a nonlinear correction $E(x)$, we take the explicit function
\[
V(x) = \|x\|_Q^p = \big( x^\top Q x \big)^{\frac{p}{2}} \tag{8}
\]
as the ansatz for the Lyapunov function candidate, where $Q \in \mathbb{R}^{d\times d}$ is a symmetric and positive definite matrix and $p > 0$. As argued in (Hafstein et al., 2018, §4) this is the expected form of a Lyapunov function for the linearized system (6), just as $x \mapsto x^\top P x$ for a symmetric and positive definite $P$ is the usual form for a Lyapunov function for a linear deterministic system $\dot{x} = Ax$. Note that typically $p < 2$, so $V$ is not differentiable at the origin. For this reason we take $x \ne 0$ in the calculations below. Assuming that we have fixed $Q$ and $p > 0$ such that $L_0 V(x) < 0$ for all $x \in \mathbb{R}^d \setminus \{0\}$, we derive a neighbourhood of the origin such that $|L_0 V(x)| > |E(x)|$, which implies $LV(x) < 0$.
From (Hafstein et al., 2018, Lemma 4.1) we can state the following: for $V(x) = \|x\|_Q^p$ we have
\[
L_0 V(x) = \frac{1}{2}\, p\, \|x\|_Q^{p-4}\, H(x) \quad\text{for all } x \in \mathbb{R}^d \setminus \{0\},
\]
where
\[
H(x) = x^\top\Big( F^\top Q + QF + \sum_{u=1}^U (G^u)^\top Q G^u \Big) x\, \|x\|_Q^2 - (2-p) \sum_{u=1}^U \Big( \frac{1}{2}\, x^\top\big( Q G^u + (G^u)^\top Q \big) x \Big)^2.
\]
This $V$ is a Lyapunov function for the linear system (6) if there is a constant $C > 0$ such that
\[
H(x) \le -C\, \|x\|_Q^2\, \|x\|^2 \quad\text{for all } x \in \mathbb{R}^d,
\]
because then
\[
L_0 V(x) \le -\frac{1}{2}\, p\, C\, \|x\|_Q^{p-2}\, \|x\|^2 \tag{9}
\]
for all $x \in \mathbb{R}^d \setminus \{0\}$.
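Since $H$ is homogeneous of degree four in $x$, the condition $H(x) \le -C\|x\|_Q^2\|x\|^2$ only needs to be checked on the unit sphere. The following sketch estimates the best admissible $C$ by sampling unit vectors; it is a quick heuristic check, not a certificate, and a rigorous $C$ can instead be obtained with the sum-of-squares approach of (Hafstein et al., 2018). The data at the bottom are illustrative assumptions.

```python
import numpy as np

def H(x, F, G_list, Q, p):
    """H(x) from the text, for V(x) = ||x||_Q^p."""
    xQx = x @ Q @ x
    M = F.T @ Q + Q @ F + sum(G.T @ Q @ G for G in G_list)
    quad = sum((0.5 * x @ (Q @ G + G.T @ Q) @ x) ** 2 for G in G_list)
    return (x @ M @ x) * xQx - (2 - p) * quad

def sampled_C(F, G_list, Q, p, samples=20_000, seed=0):
    """Heuristic estimate of the largest C with H(x) <= -C ||x||_Q^2 ||x||^2."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(samples):
        x = rng.normal(size=F.shape[0])
        x /= np.linalg.norm(x)                    # restrict to the unit sphere
        best = min(best, -H(x, F, G_list, Q, p) / (x @ Q @ x))
    return best   # a positive value suggests V is a Lyapunov function for (6)

# Illustrative data (not from the paper):
F = np.array([[-1.0, 0.5], [0.0, -2.0]])
G_list = [np.array([[0.2, 0.0], [0.0, 0.1]])]
Q = np.eye(2)
print(sampled_C(F, G_list, Q, p=1.0))
```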
Before we state and prove our results we prove a simple but useful lemma:

Lemma 3.1. Let $A = (A_{ij}),\ \widetilde A = (\widetilde A_{ij}) \in \mathbb{R}^{d\times d}$ be such that $|A_{ij}| \le \widetilde A_{ij}$ for $i, j = 1, 2, \ldots, d$. Then
\[
\|A\| \le \|\widetilde A\|. \tag{10}
\]
In particular
\[
\Big| \sum_{i,j} x_i A_{ij} y_j \Big| \le \|\widetilde A\|\,\|x\|\,\|y\| \tag{11}
\]
and
\[
\Big| \sum_{i,j,k} x_i Q_{ik} A_{kj} y_j \Big| \le \|\widetilde A\|\,\|Q^{1/2}\|\,\|x\|_Q\,\|y\| \le \|\widetilde A\|\,\kappa(Q)^{1/2}\,\|x\|_Q\,\|y\|_Q \tag{12}
\]
for every symmetric and positive definite $Q \in \mathbb{R}^{d\times d}$. If $AQ^{1/2} = Q^{1/2}A$ we even have
\[
\Big| \sum_{i,j,k} x_i Q_{ik} A_{kj} y_j \Big| \le \|\widetilde A\|\,\|x\|_Q\,\|y\|_Q. \tag{13}
\]
Proof. For $x = (x_1, x_2, \ldots, x_d)^\top$ set $\widetilde x = (|x_1|, |x_2|, \ldots, |x_d|)^\top$. Clearly $\|x\| = \|\widetilde x\|$. The estimate (10) follows from
\[
\begin{aligned}
\|Ax\|^2 &= x^\top A^\top A x = \sum_{i,j,k} x_i A_{ki} A_{kj} x_j \le \sum_{i,j,k} |x_i|\cdot|A_{ki}|\cdot|A_{kj}|\cdot|x_j|\\
&\le \sum_{i,j,k} |x_i|\, \widetilde A_{ki} \widetilde A_{kj}\, |x_j| = \widetilde x^\top \widetilde A^\top \widetilde A\, \widetilde x = \|\widetilde A \widetilde x\|^2 \le \|\widetilde A\|^2 \|\widetilde x\|^2 = \|\widetilde A\|^2 \|x\|^2
\end{aligned}
\]
and thus
\[
\|A\| := \sup_{x \ne 0} \frac{\|Ax\|}{\|x\|} \le \|\widetilde A\|.
\]
The inequality (12) follows from
\[
\begin{aligned}
\Big| \sum_{i,j,k} x_i Q_{ik} A_{kj} y_j \Big| &= \Big| \sum_{i,j} x_i \Big( \sum_k Q_{ik} A_{kj} \Big) y_j \Big| = |x^\top Q A y| = \big| (Q^{1/2}x)^\top Q^{1/2} A y \big|\\
&\le \|Q^{1/2}x\|\,\|Q^{1/2}Ay\| = \|x\|_Q\, \|Q^{1/2}Ay\| \le \|x\|_Q\, \|Q^{1/2}\|\,\|A\|\,\|y\|\\
&\le \|\widetilde A\|\,\|Q^{1/2}\|\,\|Q^{-1/2}\|\,\|x\|_Q\,\|y\|_Q
\end{aligned}
\]
and (11) follows from (12) with $Q$ as the identity matrix. To see (13) just note that if $AQ^{1/2} = Q^{1/2}A$ we have
\[
\|Q^{1/2}Ay\| = \|AQ^{1/2}y\| \le \|A\|\,\|Q^{1/2}y\| \le \|\widetilde A\|\,\|y\|_Q,
\]
which can be used to improve the estimate above.
Remark 3.2. If $A$ in (12) is symmetric we have
\[
x^\top Q A y = \sum_{i,j,k} x_i Q_{ik} A_{kj} y_j = \sum_{i,j,k} y_j A_{jk} Q_{ki} x_i = y^\top A Q x.
\]

Remark 3.3. For vectors $x, \widetilde x \in \mathbb{R}^d$ with $|x_i| \le \widetilde x_i$ for $i = 1, 2, \ldots, d$, we obviously have $\|x\| \le \|\widetilde x\|$, but in general $\|x\|_Q$ is not necessarily smaller than $\|\widetilde x\|_Q$. Take for example $x = (1, -1)^\top$, $\widetilde x = (1, 1)^\top$, and
\[
Q = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}.
\]
Then $\|x\|_Q = \sqrt{x^\top Q x} = \sqrt{6}$ but $\|\widetilde x\|_Q = \sqrt{2}$. For this reason one cannot expect $|A_{ij}| \le \widetilde A_{ij}$ to imply $\|A\|_Q \le \|\widetilde A\|_Q$ for matrices $A, \widetilde A \in \mathbb{R}^{d\times d}$.
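A short numerical check of the counterexample in Remark 3.3 (illustrative only):

```python
import numpy as np

Q = np.array([[2.0, -1.0], [-1.0, 2.0]])
x, x_tilde = np.array([1.0, -1.0]), np.array([1.0, 1.0])
# ||x||_Q = sqrt(6) exceeds ||x_tilde||_Q = sqrt(2) although |x_i| <= x_tilde_i.
print(np.sqrt(x @ Q @ x), np.sqrt(x_tilde @ Q @ x_tilde))
```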
We now come to the main contribution of this pa-
per:
Theorem 3.4. Consider the system (1), assume that $V$ as in (8) is a Lyapunov function for its linearization (6), and let $C > 0$ be a constant as in (9). Let $\rho^* > 0$ and assume the estimates (3), (4), and (5) hold true on $\mathcal{N} = D^* := \{ x \in \mathbb{R}^d : \|x\|_Q \le \rho^* \}$. Define
\[
\begin{aligned}
p^* &:= 1 + |p - 2|,\\
R_i &:= \|R^i\|, \qquad R_{ui} := \|R^{ui}\|,\\
R_F &:= \|(R_1, R_2, \ldots, R_d)\|,\\
R^u_G &:= \|(R_{u1}, R_{u2}, \ldots, R_{ud})\|,\\
R_G &:= \|(R^1_G, R^2_G, \ldots, R^U_G)\|^2,\\
\widetilde E_F &:= \|Q^{1/2}\| \bigg( R_F + p^* \sum_{u=1}^U R^u_G\, \|Q^{1/2} G^u Q^{-1/2}\| \bigg),\\
\widetilde E_G &:= \frac{1}{4}\, p^*\, \kappa(Q)\, R_G.
\end{aligned}
\]
Then
\[
LV(x) = L_0 V(x) + E(x),
\]
where $L_0 V$ is defined in (7) and
\[
|E(x)| \le \frac{1}{2}\, p\, \|x\|_Q^{p-2}\, \|x\|^2 \cdot \|x\|_Q \Big( \widetilde E_F + \widetilde E_G\, \|x\|_Q \Big)
\]
for $x \in D^* = \{ x \in \mathbb{R}^d : \|x\|_Q \le \rho^* \}$. In particular, $V$ is a Lyapunov function for the nonlinear system (1), satisfying the conditions of Definition 2.4, on
\[
U = D := \{ x \in \mathbb{R}^d : \|x\|_Q \le \rho \},
\]
with
\[
\rho < \min\Bigg\{ \rho^*,\ \frac{1}{2\widetilde E_G} \Big( \sqrt{ \widetilde E_F^{\,2} + 4C\widetilde E_G } - \widetilde E_F \Big) \Bigg\}.
\]
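Once the constants of Theorem 3.4 are available, the admissible radius $\rho$ is a one-line formula. The following sketch assembles $\widetilde E_F$, $\widetilde E_G$ and $\rho$ from the $G^u$, $Q$, $p$, $C$, $\rho^*$ and the remainder norms $R_i$, $R_{ui}$; the function name and all inputs are placeholders that have to be supplied for a concrete system.

```python
import numpy as np
from scipy.linalg import sqrtm

def theorem_3_4_radius(G_list, Q, p, C, rho_star, R_i, R_ui):
    """Upper limit for rho in Theorem 3.4.
    R_i: length-d array of ||R^i||; R_ui: (U, d) array of ||R^{ui}||."""
    norm = lambda A: np.linalg.norm(A, 2)
    Qh = sqrtm(Q).real
    Qmh = np.linalg.inv(Qh)
    p_star = 1 + abs(p - 2)
    R_F = np.linalg.norm(np.asarray(R_i))                 # ||(R_1,...,R_d)||
    R_G_u = np.linalg.norm(np.asarray(R_ui), axis=1)      # R_G^u = ||(R_{u1},...,R_{ud})||
    R_G = np.linalg.norm(R_G_u) ** 2
    kappa = norm(Q) * norm(np.linalg.inv(Q))
    E_F = norm(Qh) * (R_F + p_star * sum(r * norm(Qh @ G @ Qmh)
                                         for r, G in zip(R_G_u, G_list)))
    E_G = 0.25 * p_star * kappa * R_G
    # rho must be chosen strictly smaller than the returned value.
    return min(rho_star, (np.sqrt(E_F**2 + 4 * C * E_G) - E_F) / (2 * E_G))
```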
Proof. Let us first compute $\partial_s V(x)$ and $\partial^2_{rs} V(x)$:
\[
\partial_s V(x) = \Big( \sum_j Q_{sj} x_j + \sum_i Q_{is} x_i \Big) \frac{p}{2} \Big( \sum_{i,j} Q_{ij} x_i x_j \Big)^{\frac{p}{2}-1} = p \sum_i x_i Q_{is}\, \|x\|_Q^{p-2}
\]
and
\[
\begin{aligned}
\partial^2_{rs} V(x) &= p\, Q_{rs} \|x\|_Q^{p-2} + p \Big( \sum_j x_j Q_{js} \Big) \Big( \frac{p}{2}-1 \Big) \cdot 2 \Big( \sum_i x_i Q_{ir} \Big) \Big( \sum_{i,j} Q_{ij} x_i x_j \Big)^{\frac{p}{2}-2}\\
&= p\, \|x\|_Q^{p-2} Q_{rs} + p(p-2) \sum_{i,j} x_i x_j Q_{ir} Q_{js}\, \|x\|_Q^{p-4}\\
&= p\, \|x\|_Q^{p-4} \Big( Q_{rs} \|x\|_Q^2 + (p-2) \sum_{i,j} x_i x_j Q_{ir} Q_{js} \Big).
\end{aligned}
\]
Now set $z = (z_1, z_2, \ldots, z_d)^\top$ with $z_s := x^\top R^s(x)\, x$; then $|z_s| \le R_s \|x\|^2$ and $\|z\| \le \|x\|^2 R_F$ for $x \in D^*$.
Then
\[
\begin{aligned}
|E_F(x)| &\le \Big| \sum_s E_s(x)\, \partial_s V(x) \Big| \le \frac{p}{2}\, \|x\|_Q^{p-2} \Big| \sum_{s,i,j,k} x_j x_k R^s_{jk}(x)\, x_i Q_{is} \Big|\\
&= \frac{p}{2}\, \|x\|_Q^{p-2} \Big| \sum_{s,i} x_i Q_{is} \Big( \sum_{j,k} x_j R^s_{jk}(x)\, x_k \Big) \Big| = \frac{p}{2}\, \|x\|_Q^{p-2} \Big| \sum_{s,i} x_i Q_{is}\, x^\top R^s(x)\, x \Big|\\
&= \frac{p}{2}\, \|x\|_Q^{p-2} \Big| \sum_{s,i} x_i Q_{is}\, z_s \Big| = \frac{p}{2}\, \|x\|_Q^{p-2} \big| x^\top Q z \big|\\
&\le \frac{p}{2}\, \|x\|_Q^{p-2}\, \|Q^{1/2}x\|\, \|Q^{1/2}z\| \le \frac{p}{2}\, \|x\|_Q^{p-2}\, \|x\|_Q\, \|Q^{1/2}\|\, \|z\|\\
&\le \frac{p}{2}\, \|x\|_Q^{p-1}\, \|x\|^2\, \|Q^{1/2}\|\, R_F.
\end{aligned}
\]
Since $E_G(x) = \frac{1}{2}\sum_{r,s} E_{rs}(x)\, \partial^2_{rs} V(x)$, by using our expressions for $E_{rs}$ and $\partial^2_{rs} V(x)$ we obtain:
\[
\begin{aligned}
|E_G(x)| \le{}& \frac{1}{4}\, p\, \|x\|_Q^{p-4} \bigg| \sum_{r,s,k,l,m} x_k x_l x_m \sum_{u=1}^U \big( G^u_{rk} R^{us}_{lm}(x) + G^u_{sk} R^{ur}_{lm}(x) \big)\\
&\qquad\qquad \times \Big( Q_{rs} \|x\|_Q^2 + (p-2) \sum_{i,j} x_i x_j Q_{ir} Q_{js} \Big) \bigg|\\
&+ \frac{1}{8}\, p\, \|x\|_Q^{p-4} \bigg| \sum_{r,s,k,l,m,n} x_k x_l x_m x_n \Big( Q_{rs} \|x\|_Q^2 + (p-2) \sum_{i,j} x_i x_j Q_{ir} Q_{js} \Big) \sum_{u=1}^U R^{ur}_{kl}(x)\, R^{us}_{mn}(x) \bigg|.
\end{aligned}
\]
We now estimate the expression on the right-hand side term by term. Set $z^u = (z^u_1, z^u_2, \ldots, z^u_d)^\top$, where $z^u_i := x^\top R^{ui}(x)\, x$; then $|z^u_i| \le R_{ui}\|x\|^2$ and $\|z^u\| \le \|x\|^2 R^u_G$ for $x \in D^*$. Then
\[
\begin{aligned}
\bigg| \sum_{r,s,k,l,m} x_k x_l x_m \sum_{u=1}^U G^u_{rk} R^{us}_{lm}(x)\, Q_{rs} \|x\|_Q^2 \bigg| &= \|x\|_Q^2 \bigg| \sum_{u=1}^U \sum_{r,s,k} x_k \Big( \sum_{l,m} x_l R^{us}_{lm}(x)\, x_m \Big) Q_{sr} G^u_{rk} \bigg|\\
&= \|x\|_Q^2 \bigg| \sum_{u=1}^U \sum_{r,s,k} z^u_s Q_{sr} G^u_{rk} x_k \bigg| = \|x\|_Q^2 \bigg| \sum_{u=1}^U (z^u)^\top Q G^u x \bigg|\\
&= \|x\|_Q^2 \bigg| \sum_{u=1}^U (z^u)^\top Q G^u Q^{-1/2}\, Q^{1/2} x \bigg|\\
&\le \|x\|_Q^3\, \|x\|^2 \sum_{u=1}^U \|Q G^u Q^{-1/2}\|\, R^u_G\\
&\le \|x\|_Q^3\, \|x\|^2\, \|Q^{1/2}\| \sum_{u=1}^U \|Q^{1/2} G^u Q^{-1/2}\|\, R^u_G
\end{aligned}
\]
and similarly
\[
\begin{aligned}
\bigg| \sum_{r,s,k,l,m} x_k x_l x_m \sum_{u=1}^U G^u_{sk} R^{ur}_{lm}(x)\, Q_{rs} \|x\|_Q^2 \bigg| &= \|x\|_Q^2 \bigg| \sum_{u=1}^U \sum_{r,s,k} \big( x^\top R^{ur}(x)\, x \big) Q_{rs} G^u_{sk} x_k \bigg|\\
&= \|x\|_Q^2 \bigg| \sum_{u=1}^U \sum_{r,s,k} z^u_r Q_{rs} G^u_{sk} x_k \bigg| = \|x\|_Q^2 \bigg| \sum_{u=1}^U (z^u)^\top Q G^u x \bigg|\\
&\le \|x\|_Q^3\, \|x\|^2 \sum_{u=1}^U \|Q G^u Q^{-1/2}\|\, R^u_G\\
&\le \|x\|_Q^3\, \|x\|^2\, \|Q^{1/2}\| \sum_{u=1}^U \|Q^{1/2} G^u Q^{-1/2}\|\, R^u_G.
\end{aligned}
\]
Further
\[
\begin{aligned}
\bigg| \sum_{r,s,k,l,m} & x_k x_l x_m \sum_{u=1}^U G^u_{rk} R^{us}_{lm}(x)\, (p-2) \sum_{i,j} x_i x_j Q_{ir} Q_{js} \bigg|\\
&= \bigg| (p-2) \sum_{u=1}^U \sum_{j,s} \Big( \sum_{i,k,r} x_i Q_{ir} G^u_{rk} x_k \Big) \Big( \sum_{l,m} x_l R^{us}_{lm}(x)\, x_m \Big) Q_{sj} x_j \bigg|\\
&= \bigg| (p-2) \sum_{u=1}^U \sum_{j,s} \big( x^\top Q G^u Q^{-1/2}\, Q^{1/2} x \big) \big( x^\top R^{us}(x)\, x \big) Q_{sj} x_j \bigg|\\
&\le |p-2| \sum_{u=1}^U \|x\|_Q^2\, \|Q^{1/2} G^u Q^{-1/2}\| \bigg| \sum_{j,s} z^u_s Q_{sj} x_j \bigg|\\
&\le |p-2|\, \|x\|_Q^2 \sum_{u=1}^U \|Q^{1/2} G^u Q^{-1/2}\|\, \big| (z^u)^\top Q x \big|\\
&\le |p-2|\, \|x\|_Q^2 \sum_{u=1}^U \|Q^{1/2} G^u Q^{-1/2}\|\, \|z^u\|\, \|Q^{1/2}\|\, \|x\|_Q\\
&\le |p-2|\, \|x\|_Q^3\, \|x\|^2\, \|Q^{1/2}\| \sum_{u=1}^U \|Q^{1/2} G^u Q^{-1/2}\|\, R^u_G
\end{aligned}
\]
and similarly
\[
\begin{aligned}
\bigg| \sum_{r,s,k,l,m} & x_k x_l x_m \sum_{u=1}^U G^u_{sk} R^{ur}_{lm}(x)\, (p-2) \sum_{i,j} x_i x_j Q_{ir} Q_{js} \bigg|\\
&= \bigg| (p-2) \sum_{u=1}^U \sum_{i,r} \Big( \sum_{j,k,s} x_j Q_{js} G^u_{sk} x_k \Big) \Big( \sum_{l,m} x_l R^{ur}_{lm}(x)\, x_m \Big) Q_{ri} x_i \bigg|\\
&= \bigg| (p-2) \sum_{u=1}^U \sum_{i,r} \big( x^\top Q G^u x \big) \big( x^\top R^{ur}(x)\, x \big) Q_{ri} x_i \bigg|\\
&\le |p-2| \sum_{u=1}^U \|x\|_Q^2\, \|Q^{1/2} G^u Q^{-1/2}\| \bigg| \sum_{i,r} z^u_r Q_{ri} x_i \bigg|\\
&\le |p-2|\, \|x\|_Q^3\, \|x\|^2\, \|Q^{1/2}\| \sum_{u=1}^U \|Q^{1/2} G^u Q^{-1/2}\|\, R^u_G.
\end{aligned}
\]
Further
\[
\begin{aligned}
\bigg| \sum_{r,s,k,l,m,n} & x_k x_l x_m x_n\, Q_{rs} \|x\|_Q^2 \sum_{u=1}^U R^{ur}_{kl}(x)\, R^{us}_{mn}(x) \bigg|\\
&= \|x\|_Q^2 \bigg| \sum_{u=1}^U \sum_{r,s} \Big( \sum_{k,l} x_k R^{ur}_{kl}(x)\, x_l \Big) Q_{rs} \Big( \sum_{m,n} x_m R^{us}_{mn}(x)\, x_n \Big) \bigg|\\
&= \|x\|_Q^2 \bigg| \sum_{u=1}^U \sum_{r,s} z^u_r Q_{rs} z^u_s \bigg| = \|x\|_Q^2 \bigg| \sum_{u=1}^U (z^u)^\top Q z^u \bigg|\\
&\le \|x\|_Q^2 \sum_{u=1}^U \|Q\|\, \|z^u\|^2 \le \|x\|_Q^2\, \|x\|^4\, \|Q\| \sum_{u=1}^U (R^u_G)^2\\
&= \|x\|_Q^2\, \|x\|^4\, \|Q\|\, R_G \le \|x\|_Q^4\, \|x\|^2\, \|Q^{-1}\|\, \|Q\|\, R_G = \|x\|_Q^4\, \|x\|^2\, \kappa(Q)\, R_G.
\end{aligned}
\]
Finally
\[
\begin{aligned}
\bigg| \sum_{r,s,k,l,m,n} & x_k x_l x_m x_n\, (p-2) \sum_{i,j} x_i x_j Q_{ir} Q_{js} \sum_{u=1}^U R^{ur}_{kl}(x)\, R^{us}_{mn}(x) \bigg|\\
&= \bigg| (p-2) \sum_{u=1}^U \sum_{i,j,r,s} x_i Q_{ir} \Big( \sum_{k,l} x_k R^{ur}_{kl}(x)\, x_l \Big)\, x_j Q_{js} \Big( \sum_{m,n} x_m R^{us}_{mn}(x)\, x_n \Big) \bigg|\\
&= \bigg| (p-2) \sum_{u=1}^U \sum_{i,j,r,s} x_i Q_{ir} z^u_r\, x_j Q_{js} z^u_s \bigg| = \bigg| (p-2) \sum_{u=1}^U \Big( \sum_{i,r} x_i Q_{ir} z^u_r \Big) \Big( \sum_{j,s} x_j Q_{js} z^u_s \Big) \bigg|\\
&= |p-2| \sum_{u=1}^U \big( x^\top Q z^u \big)^2 \le |p-2| \sum_{u=1}^U \|x\|_Q^2\, \|Q^{1/2}\|^2\, \|z^u\|^2\\
&\le |p-2|\, \|x\|_Q^2\, \|x\|^4\, \|Q\|\, R_G \le |p-2|\, \|x\|_Q^4\, \|x\|^2\, \|Q^{-1}\|\,\|Q\|\, R_G = |p-2|\, \|x\|_Q^4\, \|x\|^2\, \kappa(Q)\, R_G.
\end{aligned}
\]
By combining the results from these estimates we get
\[
\begin{aligned}
|E_G(x)| \le{}& \frac{1}{4}\, p\, \|x\|_Q^{p-4} \bigg( 2\, \|x\|_Q^3\, \|x\|^2\, \|Q^{1/2}\| \sum_{u=1}^U \|Q^{1/2} G^u Q^{-1/2}\|\, R^u_G\\
&\qquad\qquad + 2\, |p-2|\, \|x\|_Q^3\, \|x\|^2\, \|Q^{1/2}\| \sum_{u=1}^U \|Q^{1/2} G^u Q^{-1/2}\|\, R^u_G \bigg)\\
&+ \frac{1}{8}\, p\, \|x\|_Q^{p-4} \Big( \|x\|_Q^4\, \|x\|^2\, \kappa(Q)\, R_G + |p-2|\, \|x\|_Q^4\, \|x\|^2\, \kappa(Q)\, R_G \Big)\\
={}& \frac{1}{2}\, p\, \|x\|_Q^{p-1}\, \|x\|^2\, \big( 1 + |p-2| \big) \bigg( \|Q^{1/2}\| \sum_{u=1}^U R^u_G\, \|Q^{1/2} G^u Q^{-1/2}\| + \frac{1}{4}\, \kappa(Q)\, R_G\, \|x\|_Q \bigg)
\end{aligned}
\]
and we can estimate
\[
|E(x)| \le |E_F(x)| + |E_G(x)| \le \frac{1}{2}\, p\, \|x\|_Q^{p-2}\, \|x\|^2 \cdot \|x\|_Q \Big( \widetilde E_F + \widetilde E_G\, \|x\|_Q \Big),
\]
which proves the first stated inequality.
Since
\[
\begin{aligned}
LV(x) &= L_0 V(x) + E(x) \le -\frac{1}{2}\, p\, C\, \|x\|_Q^{p-2}\, \|x\|^2 + |E(x)|\\
&\le -\frac{1}{2}\, p\, \|x\|_Q^{p-2}\, \|x\|^2 \Big[ C - \|x\|_Q \Big( \widetilde E_F + \widetilde E_G\, \|x\|_Q \Big) \Big],
\end{aligned}
\]
we have $LV(x) < 0$ if
\[
\|x\|_Q \Big( \widetilde E_F + \widetilde E_G\, \|x\|_Q \Big) < C,
\]
i.e.
\[
\|x\|_Q < \frac{ -\widetilde E_F + \sqrt{ \widetilde E_F^{\,2} + 4C\widetilde E_G } }{ 2\widetilde E_G }.
\]
Thus for
\[
x \in D = \{ x \in \mathbb{R}^d : \|x\|_Q \le \rho \}
\]
with
\[
\rho < \min\Bigg\{ \rho^*,\ \frac{1}{2\widetilde E_G} \Big( \sqrt{ \widetilde E_F^{\,2} + 4C\widetilde E_G } - \widetilde E_F \Big) \Bigg\},
\]
we have $LV(x) < 0$, which concludes the proof.
4 CONCLUSIONS
We derived rigorous bounds on a domain on which a Lyapunov function for a linearized stochastic differential equation is also a Lyapunov function for the original nonlinear system. This allows for the derivation of a lower bound on the equilibrium's γ-basin of attraction, i.e. the set of initial values from which solutions converge to the equilibrium with probability no less than γ. Another application is the facilitation of the numerical method to compute Lyapunov functions for nonlinear stochastic differential equations on a larger domain discussed in (Gudmundsson and Hafstein, 2018), because that method first requires a local Lyapunov function at the equilibrium.
ACKNOWLEDGEMENT
The research done for this paper was supported by the Icelandic Research Fund (Rannís) in the project ‘Lyapunov Methods and Stochastic Stability’ (152429-051), which is gratefully acknowledged.
REFERENCES
Gudmundsson, S. and Hafstein, S. (2018). Probabilistic basin of attraction and its estimation using two Lyapunov functions. Complexity, Article ID 2895658.

Hafstein, S. (2004). A constructive converse Lyapunov theorem on exponential stability. Discrete Contin. Dyn. Syst. Ser. A, 10(3):657–678.

Hafstein, S., Gudmundsson, S., Giesl, P., and Scalas, E. (2018). Lyapunov function computation for autonomous linear stochastic differential equations using sum-of-squares programming. Discrete Contin. Dyn. Syst. Ser. B, 23(2):939–956.

Kallenberg, O. (2002). Foundations of Modern Probability. Springer, 2nd edition.

Khasminskii, R. (2012). Stochastic Stability of Differential Equations. Springer, 2nd edition.

Mao, X. (2008). Stochastic Differential Equations and Applications. Woodhead Publishing, 2nd edition.