Contraction Metrics by Numerical Integration and Quadrature: Uniform Error Estimate

Peter Giesl$^{1,a}$, Sigurdur Hafstein$^{2,b}$ and Iman Mehrabinezhad$^{2,c}$

$^{1}$Department of Mathematics, University of Sussex, Falmer, BN1 9QH, U.K.
$^{2}$Science Institute, University of Iceland, Dunhagi 3, 107 Reykjavík, Iceland
$^{a}$https://orcid.org/0000-0003-1421-6980, $^{b}$https://orcid.org/0000-0003-0073-2765, $^{c}$https://orcid.org/0000-0002-6346-9901

Keywords: Contraction Metric, Numerical Method, Error Estimate.
Abstract:
We show that contraction metrics for continuous time dynamical systems can be computed numerically using
numerical integration of certain initial value problems with a subsequent numerical quadrature. Further, we
show that for any compact subset of an equilibrium’s basin of attraction and any ε > 0, the parameters for
the numerical methods, i.e. the integration interval and the step-size, can be chosen such that the error in the
contraction metric is less than ε at any point in the compact subset. These results will be used as a part of a
numerical method to rigorously compute contraction metrics.
1 INTRODUCTION
We consider the system
$$\dot{x} = f(x), \qquad f \in C^s(\mathbb{R}^n;\mathbb{R}^n),\ s \ge 1. \qquad (1)$$
The solution $x(t)$ to the initial value problem (1) with $x(0) = \xi$ is denoted by $\phi(t,\xi)$. A contraction metric
for system (1) is a Riemannian metric defined on a
positively invariant set of the dynamics, such that the
distance between adjacent trajectories is decreasing
with respect to the metric. The existence of a contraction metric guarantees that there is exactly one equilibrium point inside a positively invariant and connected set, and that this equilibrium is exponentially stable.
Contraction metrics have received considerable attention in the literature (Lewis, 1949; Lewis, 1951; Demidovič, 1961; Krasovskiĭ, 1963; Borg, 1960; Hartman, 1961; Hartman, 1964; Lohmiller and Slotine, 1998; Aminzare and Sontag, 2014; Simpson-Porco and Bullo, 2014; Forni and Sepulchre, 2014; Giesl, 2015), as they characterize the long-term behavior of system (1). Since many phenomena in engineering and science are modelled by systems of the form (1), contraction metrics are of much value in understanding real-world systems.
Since the analytical computation of a contraction metric for a nonlinear system is notoriously difficult, numerical methods have been considered (Ayl-
ward et al., 2008; Giesl and Hafstein, 2013; Giesl,
2019; Giesl et al., 2023a), see also the recent re-
view (Giesl et al., 2023b). To advance such methods
we present a novel theorem that shows that contrac-
tion metrics can be approximated arbitrarily close to
the analytic solution, uniformly on any compact sub-
set K of an exponentially stable equilibrium’s basin
of attraction, using numerical integration and quadra-
ture. These results are essential in developing com-
bined approximation-verification methods to rigor-
ously compute contraction metrics, as in (Giesl et al.,
2021a; Giesl et al., 2021b), but using numerical in-
tegration and quadrature for the approximation in-
stead of generalized interpolation in reproducing ker-
nel Hilbert spaces (Giesl et al., 2023c).
Let us give an overview of the paper: In Sec-
tion 2, we recall some facts about contraction metrics,
including an existence result of a contraction met-
ric given by an integral formula. In Section 3, we
numerically approximate a contraction metric using
numerical integration, with the fourth-order Adams-
Bashforth (AB4) multi-step scheme initialized with
fourth-order Runge-Kutta (RK4), and subsequently
we use numerical quadrature to approximately inte-
grate the results; we also derive error bounds for these
methods. These estimates are then used to prove the
main result of the paper, Theorem 4.1 presented in
Section 4, before we conclude our work in Section 5.
Notation: We write $\mathbb{N}_0 := \{0, 1, 2, \ldots\}$ for the natural numbers, including zero, and $\mathbb{N}^+ := \mathbb{N}_0 \setminus \{0\}$ for
the positive natural numbers. We denote the usual $p$-norms on $\mathbb{R}^n$ and the corresponding induced matrix norms by $\|\cdot\|_p$, $1 \le p < \infty$. For both vectors in $\mathbb{R}^n$ and matrices in $\mathbb{R}^{n\times n}$ we write $\|\cdot\|_{\max}$ for the maximum absolute value norm, i.e. $\|x\|_{\max} := \max_{i=1,2,\ldots,n} |x_i|$ for a vector $x \in \mathbb{R}^n$ and $\|A\|_{\max} := \max_{i,j=1,2,\ldots,n} |a_{ij}|$ for a matrix $A = (a_{ij}) \in \mathbb{R}^{n\times n}$. Apart from the usual equivalence estimates for the $p$-norms on $\mathbb{R}^n$, recall the norm equivalence $\|A\|_{\max} \le \|A\|_2 \le n\|A\|_{\max}$ for a matrix $A \in \mathbb{R}^{n\times n}$, and that $\|\cdot\|_{\max}$ is not sub-multiplicative, but $\|Ab\|_{\max} \le n\|A\|_{\max}\|b\|_{\max}$ for $b := B \in \mathbb{R}^{n\times n}$ or $b := b \in \mathbb{R}^n$. We denote the symmetric $n\times n$ matrices with real entries by $\mathbb{S}^{n\times n}$ and we write $I$ for the $n\times n$ identity matrix ($n$ can always be determined from the context). $A \preceq B$ for $A, B \in \mathbb{S}^{n\times n}$ means that the matrix $A - B$ is negative semi-definite, i.e. $x^T(A - B)x \le 0$ for all $x \in \mathbb{R}^n$.
2 CONTRACTION METRICS
We first review basic concepts about Riemannian con-
traction metrics that are used in this paper.
Definition 2.1 (Riemannian metric, contraction metric). Let $K$ be a compact subset of an open set $G \subset \mathbb{R}^n$. A function $M \in C^1(G;\mathbb{S}^{n\times n})$ is called a Riemannian metric if $M(x)$ is positive definite at every $x \in G$. The Riemannian metric $M$ is said to be a contraction metric for system (1), contracting on $K$, if
$$M(x)Df(x) + Df(x)^T M(x) + M'(x) \preceq -2\nu M(x)$$
for some $\nu > 0$ and all $x \in K$.
In Definition 2.1,
$$M'(x) := \left.\frac{d}{dt} M(\phi(t,x))\right|_{t=0} = \big(\nabla M_{ij}(x)\cdot f(x)\big)_{i,j\in\{1,2,\ldots,n\}}$$
is the orbital derivative of $M$ along the solutions of (1).
The following theorem follows directly from (Giesl and Wendland, 2019, Thms. 2.2 and 2.3). In formula (2), $\tau \mapsto \phi(\tau,x)$ is the solution to (1) with initial value $\phi(0,x) = x$ and $\tau \mapsto \psi(\tau,x)$ is the matrix-valued solution to
$$\dot{Y} = Df(\phi(t,x))Y, \qquad Y(0) = I,$$
i.e. $\tau \mapsto \psi(\tau,x)$ is the principal fundamental matrix solution.
Theorem 2.2 (Existence of a contraction metric). Let $f \in C^s(\mathbb{R}^n;\mathbb{R}^n)$, $s \ge 2$. Let $x_0$ be an exponentially stable equilibrium of (1) with basin of attraction $A(x_0) := \{x \in \mathbb{R}^n : \lim_{t\to\infty}\phi(t,x) = x_0\}$. Let $C \in C^{s-1}(A(x_0);\mathbb{S}^{n\times n})$ be such that $C(x)$ is a positive definite matrix for all $x \in A(x_0)$. Then $M \in C^{s-1}(A(x_0);\mathbb{S}^{n\times n})$, given by the formula
$$M(\xi) = \int_0^\infty \psi(\tau,\xi)^T C(\phi(\tau,\xi))\,\psi(\tau,\xi)\,d\tau, \qquad (2)$$
is a contraction metric for (1) that is contracting on any compact $K \subset A(x_0)$.
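As a simple illustration of formula (2) (an added example, not part of the original argument), consider the linear case $f(x) = Ax$ with $A$ Hurwitz and a constant weight $C(x) \equiv C \succ 0$. Then $\psi(\tau,\xi) = e^{A\tau}$ and (2) yields the constant metric
$$M = \int_0^\infty e^{A^T\tau} C\, e^{A\tau}\, d\tau,$$
which is exactly the solution of the Lyapunov equation $A^T M + M A = -C$. Since $M' = 0$ and $Df(x) = A$, the contraction condition of Definition 2.1 reduces to $A^T M + M A = -C \prec 0$ and thus holds with some $\nu > 0$.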
In the following section we will show that the contraction metric $M(\xi)$ in formula (2) can be approximated arbitrarily well using numerical integration and numerical quadrature.
3 ESTIMATION METHOD
In this section we describe in detail how we estimate $M(\xi)$ in formula (2) by $\widetilde M(\xi)$ at a point $\xi \in \mathbb{R}^n$ in three steps, and we conclude with an error estimate in Theorem 4.1. We first fix a matrix-valued function $C \in C^{s-1}(\mathbb{R}^n;\mathbb{S}^{n\times n})$, which in practice can be taken simply as the constant identity matrix $I \in \mathbb{R}^{n\times n}$, a time-horizon $H > 0$, and a set of points $X$, at which we compute values for our metric $\widetilde M$ inspired by (2). For $\xi \in X$ we first compute a numerical approximation $\widetilde\phi : [0,H] \to \mathbb{R}^n$ to the initial-value problem
$$\dot{x} = f(x), \qquad x(0) = \xi, \qquad (3)$$
on the time-horizon $[0,H]$. We do this by fixing the number of time-steps $N$ and the corresponding length of a uniform time-step $h := H/N$ and then generating a sequence of vectors $\widetilde\phi_i$, $i = 0,1,\ldots,N$, such that $\widetilde\phi_i$ approximates the true solution $\phi(\cdot,\xi)$ to the initial-value problem (3) at time $t_i := ih$, i.e. $\widetilde\phi_i \approx \phi(t_i,\xi)$. In the sequel, when we refer to $\widetilde\phi = \widetilde\phi(t,\xi)$ as a function, we mean the linear interpolation of the values $\widetilde\phi_i$, i.e.
$$\widetilde\phi(t) = \frac{t - t_i}{t_{i+1} - t_i}\big(\widetilde\phi_{i+1} - \widetilde\phi_i\big) + \widetilde\phi_i \qquad \text{when } t_i \le t \le t_{i+1}.$$
Then we use our approximate solution $\widetilde\phi$ to (3) to obtain an approximation $\widetilde Y$ of the principal fundamental matrix solution to $\dot{Y} = Df(\phi(t,\xi))Y$. That is, we solve numerically the matrix-valued initial-value problem
$$\dot{Y} = g(t)Y, \qquad Y(0) = I, \qquad (4)$$
where $Df(\phi(t,\xi))$ has been substituted by the approximation $g(t) := Df(\widetilde\phi(t,\xi))$. Finally, we use our numerical solutions $\widetilde\phi$ to (3) and $\widetilde Y$ to (4) to compute an approximation
$$\widetilde M(\xi) \approx \int_0^H Y(\tau)^T C(\phi(\tau,\xi))\,Y(\tau)\,d\tau \qquad (5)$$
to $M(\xi)$ using a Romberg-like numerical quadrature. Note that there are several approximations to the actual integral: firstly, we use a Romberg-like numerical quadrature to compute the integral, for which we now only need values of the integrand at discrete time steps; secondly, we replace the values of the integrand with our numerical solutions $\widetilde\phi_i$ and $\widetilde Y_i$ to (3) and (4), respectively.
For the initial value problems (3) and (4) we use the Adams-Bashforth method of order 4 (AB4) initialized with the usual Runge-Kutta method of order 4 (RK4). The formula for a general initial-value problem of the form
$$\dot{z} = v(t,z), \qquad z(t_0) = \xi, \qquad (6)$$
to generate the approximations $\widetilde z_i \approx z(t_i,\xi)$, where $t_i = ih + t_0$ and $z(t_i,\xi)$ is the value of the true solution $t \mapsto z(t,\xi)$ to (6) at time $t_i$, is
$$\widetilde z_{i+1} = \widetilde z_i + \frac{h}{24}\big(55\,v_i - 59\,v_{i-1} + 37\,v_{i-2} - 9\,v_{i-3}\big), \qquad (7)$$
for AB4, where $v_i := v(t_i,\widetilde z_i)$. Since AB4 is a multi-step method, we use RK4 to compute the first three steps $\widetilde z_1, \widetilde z_2, \widetilde z_3$ after $\widetilde z_0 := \xi$, needed to initialize it. In more detail, for the initialization we set
$$\begin{aligned}
k_1 &= h\,v(t_i, \widetilde z_i) \qquad (8)\\
k_2 &= h\,v(t_i + h/2,\ \widetilde z_i + k_1/2)\\
k_3 &= h\,v(t_i + h/2,\ \widetilde z_i + k_2/2)\\
k_4 &= h\,v(t_i + h,\ \widetilde z_i + k_3)\\
\widetilde z_{i+1} &= \widetilde z_i + \tfrac{1}{6}\big(k_1 + 2k_2 + 2k_3 + k_4\big)
\end{aligned}$$
for i = 0,1,2.
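To make the scheme (6)–(8) concrete, the following minimal Python sketch (ours; the function name and interface are not from the paper) integrates a generic initial-value problem with AB4 initialized by RK4.

```python
import numpy as np

def ab4_with_rk4_start(v, z0, t0, h, N):
    """Integrate dz/dt = v(t, z), z(t0) = z0, with N steps of size h.

    The first three steps use classical RK4, formula (8); the remaining
    steps use the 4th-order Adams-Bashforth formula (7).
    Returns the array [z_0, z_1, ..., z_N].
    """
    z = [np.asarray(z0, dtype=float)]
    vals = []                      # stored v(t_i, z_i) for the AB4 history
    for i in range(N):
        t_i = t0 + i * h
        vals.append(v(t_i, z[i]))
        if i < 3:                  # RK4 start-up, formula (8)
            k1 = h * v(t_i, z[i])
            k2 = h * v(t_i + h / 2, z[i] + k1 / 2)
            k3 = h * v(t_i + h / 2, z[i] + k2 / 2)
            k4 = h * v(t_i + h, z[i] + k3)
            z.append(z[i] + (k1 + 2 * k2 + 2 * k3 + k4) / 6)
        else:                      # AB4 step, formula (7)
            z.append(z[i] + h / 24 * (55 * vals[i] - 59 * vals[i - 1]
                                      + 37 * vals[i - 2] - 9 * vals[i - 3]))
    return np.array(z)
```

In each AB4 step only the four most recent entries of `vals` are used, in line with Remark 3.1 b below.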
In the following, we fix a compact set $S \subset \mathbb{R}^n$ which is positively invariant, both for the solution $\phi$ and for its numerical approximation sequences $\widetilde\phi_i$. Hence, the values $\widetilde\phi(t)$, $t \in [0,H]$, are in the convex hull of $S$, independent of the initial value $\xi \in S$. Note that we have a Lipschitz constant for any continuously differentiable function on $S$ or its (compact) convex hull. Level sets of numerically computed Lyapunov-like functions can be used for identifying such sets, see (Giesl et al., 2023d, Thm. 3.4). In (Giesl et al., 2023d, Thms. 2.1 and 3.5) it is established that for a given compact set $K \subset A(x_0)$ there exists a compact set $S \supset K$, which is positively invariant for both the solution and its numerical approximation by our numerical method, i.e. AB4 initialized with RK4, if the time-steps $h$ are sufficiently small, namely $h \le h^*$, where $h^* > 0$ is a constant depending on $K$ and $f$. Subsequently we additionally assume that $h$ is smaller than other constants for additional estimates to hold true.
In the following subsections we explain and estimate the errors of the numerical approximations to the solution $\phi(t,\xi)$, to the solution of $\dot{Y} = Df(\phi(t,\xi))Y$, and finally of the numerical quadrature of the integral, in preparation for the main result, Theorem 4.1.
3.1 Step I: Numerical Approximation of $\dot{x} = f(x)$
For the initial value problem (3) we use AB4 initialized with RK4; i.e. we set $v(t,z) = f(z)$ in (6). The true solution $z(t,\xi)$ is denoted by $\phi(t,\xi)$ and its approximations at $t_i = ih$ by $\widetilde\phi_i$. Thus, the formulas (8) become
$$\begin{aligned}
k_1 &= h\,f(\widetilde\phi_i) \qquad (9)\\
k_2 &= h\,f(\widetilde\phi_i + k_1/2)\\
k_3 &= h\,f(\widetilde\phi_i + k_2/2)\\
k_4 &= h\,f(\widetilde\phi_i + k_3)\\
\widetilde\phi_{i+1} &= \widetilde\phi_i + \tfrac{1}{6}\big(k_1 + 2k_2 + 2k_3 + k_4\big),
\end{aligned}$$
where $i = 0,1,2$. For $i \ge 3$ we use AB4 and (7) becomes
$$\widetilde\phi_{i+1} = \widetilde\phi_i + \frac{h}{24}\Big(55\,f(\widetilde\phi_i) - 59\,f(\widetilde\phi_{i-1}) + 37\,f(\widetilde\phi_{i-2}) - 9\,f(\widetilde\phi_{i-3})\Big). \qquad (10)$$
In the following we assume that a fixed time-horizon $H > 0$ is given such that $t_i \le H$ for all $i$. The error of approximation in the initializing steps using RK4 is bounded by the local error $C_1 h^5$, and for the rest of the sequence $\widetilde\phi_i$, computed with AB4, the local error is bounded by $C_2 h^5$, where $C_1$ and $C_2$ are constants that depend on the derivatives of $f$ up to order four in $S$, which is positively invariant both for the true solution and the approximation, see (Giesl et al., 2023d, Thm. 3.5). We recall below the well-known result that the global error is then bounded by $C_4 h^4$ on the interval $[0,H]$.
Since we initialized the AB4 method using RK4 and not the exact values of the corresponding right-hand side function $f$, there will be a small error accumulating through the algorithm. In more detail, let $L > 0$ be a Lipschitz constant for $f$ on $S \subset \mathbb{R}^n$. If $y$ and $z$ are solutions in $S$ with initial conditions $y(a)$ and $z(a)$ at time $t = a$, respectively, then it is well known that Gronwall's inequality delivers
$$\|y(t) - z(t)\|_{\max} \le e^{L|t-a|}\,\|y(a) - z(a)\|_{\max} \qquad (11)$$
as long as the solutions stay in $S$.
Let us recall how the local errors accumulate to form global errors in the multi-step scenario, see e.g. (Sauer, 2012; Deuflhard and Hohmann, 2008),
because we will use similar reasoning in the follow-
ing. At the initial condition $\widetilde x_0 = \xi$, the global error is $g_0 = \|\widetilde x_0 - x_0\|_{\max} = \|\xi - \xi\|_{\max} = 0$. After one step, there is no accumulated error from previous steps, and the bound on the global error is the local truncation error, $g_1 = e_1 = \|\widetilde x_1 - x_1\|_{\max} \le C_1 h^5$. After two steps, we break $g_2$ down into the local truncation error plus the accumulated error from the earlier step. Define $z(t)$ to be the solution of the initial value problem
$$\dot{x} = f(x), \qquad x(t_1) = \widetilde x_1, \qquad t \in [t_1,t_2]. \qquad (12)$$
Thus, $z(t_2)$ is the exact value of the solution at $t = t_2$ starting at the initial condition $(t_1,\widetilde x_1)$. Note that if we used the initial condition $(t_1,x_1)$, we would get $x_2$, which is on the actual solution curve, unlike $z(t_2)$. Then $e_2 = \|\widetilde x_2 - z(t_2)\|_{\max} \le C_1 h^5$ is the local truncation error of step $i = 2$. The other difference $\|z(t_2) - x_2\|_{\max}$ is covered by equation (11), since it is the difference between two solutions of the same equation with different initial conditions $\widetilde x_1$ and $x_1$. Therefore,
$$g_2 = \|\widetilde x_2 - x_2\|_{\max} \le \|\widetilde x_2 - z(t_2)\|_{\max} + \|z(t_2) - x_2\|_{\max} \le e_2 + e^{Lh} g_1 = e_2 + e^{Lh} e_1 \le C_1 h^5\big(1 + e^{Lh}\big).$$
The argument is the same for step $i = 3$, which yields
$$g_3 = \|\widetilde x_3 - x_3\|_{\max} \le e_3 + e^{Lh} g_2 \le e_3 + e^{Lh} e_2 + e^{2Lh} e_1 \le C_1 h^5\big(1 + e^{Lh} + e^{2Lh}\big).$$
For step $i = 4$, the initializing phase with RK4 is finished and we have a different local truncation error term, namely $e_4 = \|\widetilde x_4 - z(t_4)\|_{\max} \le C_2 h^5$. Thus,
$$g_4 = \|\widetilde x_4 - x_4\|_{\max} \le \|\widetilde x_4 - z(t_4)\|_{\max} + \|z(t_4) - x_4\|_{\max} \le e_4 + e^{Lh} g_3 \le C_2 h^5 + e^{Lh} C_1 h^5\big(1 + e^{Lh} + e^{2Lh}\big) \le C_3 h^5\big(1 + e^{Lh} + e^{2Lh} + e^{3Lh}\big),$$
where we have introduced the new constant $C_3 := \max(C_1,C_2) > 0$. The rest can be done similarly to obtain the global truncation error at any step $i$:
$$\begin{aligned}
g_i = \|\widetilde x_i - x_i\|_{\max} &\le e_i + e^{Lh} e_{i-1} + e^{2Lh} e_{i-2} + \ldots + e^{(i-1)Lh} e_1\\
&\le C_3 h^5\big(1 + e^{Lh} + \ldots + e^{(i-1)Lh}\big) = C_3 h^5\,\frac{e^{iLh} - 1}{e^{Lh} - 1}\\
&\le \frac{C_3 h^4}{L}\big(e^{L t_i} - 1\big) \le \frac{C_3}{L}\big(e^{LH} - 1\big)\,h^4 =: C_4 h^4, \qquad (13)
\end{aligned}$$
where $C_4$ depends on $H$. Applying this result to $x_i = \phi(t_i,\xi)$ and $\widetilde x_i = \widetilde\phi_i$ gives us the desired estimate.
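As a quick, non-rigorous check of the fourth-order global bound (13), one can integrate a problem with a known solution and observe the error as $h$ is halved; the snippet below (ours) uses `ab4_with_rk4_start` from the sketch in the previous section with $f(x) = -x$.

```python
# Empirical check of the O(h^4) global error (13) for dx/dt = -x, x(0) = 1,
# whose exact solution is exp(-t); uses ab4_with_rk4_start from above.
import numpy as np

H = 5.0
for N in (50, 100, 200, 400):
    h = H / N
    traj = ab4_with_rk4_start(lambda t, x: -x, np.array([1.0]), 0.0, h, N)
    exact = np.exp(-h * np.arange(N + 1))
    err = np.max(np.abs(traj[:, 0] - exact))
    print(f"h = {h:.4f}   max error = {err:.3e}")   # error drops ~16x per halving
```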
3.2 Step II: Numerical Approximation of $\dot{Y} = g(t)Y$
Once more, we use AB4 and RK4 to solve the initial value problem (4) numerically. We are still assuming that the time-horizon $H > 0$ from Step I is fixed. Note that with $y_1(t), y_2(t), \ldots, y_n(t)$ as the column vectors of the matrix $Y = Y(t)$ in (4), i.e.
$$Y(t) = \big(\,y_1(t)\;\; y_2(t)\;\;\cdots\;\; y_n(t)\,\big),$$
the matrix-valued initial-value problem (4) boils down to the vector-valued initial-value problems
$$\dot{y}_j = g(t)\,y_j, \qquad y_j(0) = e_j, \qquad j = 1,2,\ldots,n, \qquad (14)$$
where $e_j$ is the usual $j$th unit vector in $\mathbb{R}^n$.
Note that for $y(t) = y_j(t)$ with any $j = 1,\ldots,n$ and $g(t) = Df(\phi(t,\xi))$ we have, with $M = \max_{x\in S}\|Df(x)\|_2$ in the positively invariant set $S$,
$$\begin{aligned}
\|y(t)\|_{\max} \le \|y(t)\|_2 &\le \|y(0)\|_2 + \int_0^t \|Df(\phi(s,\xi))\|_2\,\|y(s)\|_2\,ds\\
&\le \|y(0)\|_2 + \int_0^t M\,\|y(s)\|_2\,ds\\
&\le \|y(0)\|_2\,\exp(tM) \le n\,\|y(0)\|_{\max}\,\exp(tM),
\end{aligned}$$
where we have used Gronwall's lemma (Walter, 1998). Hence, there exists a constant $C$ such that $\|Y(t)\|_{\max} \le C$ holds for all $t \in [0,H]$; note that $C$ depends on $H$. This also implies that the derivatives with respect to $(t,y)$ up to order 4 of the right-hand side, namely of $Df(\phi(t,\xi))y$, are bounded, if $f \in C^5$, uniformly for $\xi \in S$, and the error of the approximation using RK4 and AB4 is bounded by a constant times $h^4$. We will assume that, in addition to the previous assumption $h \le h^*$, the step-size $h$ is also bounded by the constant $\min(\bar h, h^{**}, 1)$, which depends on $H$ and $S$, see (22) and (25).
However, in our computations we need to replace $\phi(\cdot,\xi)$ by $\widetilde\phi \approx \phi(\cdot,\xi)$. To study the error, we will altogether consider and compare three approximate solutions of $\dot{Y} = Df(\phi(t,\xi))Y$, $Y(0) = I$:
1. We denote by $Y(t)$ the solution of $\dot{Y} = Df(\phi(t,\xi))Y$, $Y(0) = I$, at a given time $t$. The values at times $t_i$ are denoted by $Y_i = Y(t_i)$.
2. By $\widetilde Y^{\phi}_i$ we denote the numerical approximation of $Y(t)$ at time $t_i$ using RK4 and AB4 with the true solution $\phi$ in the formulas; these formulas use $k_1,\ldots,k_4$, see (15) and (16).
3. Finally, we denote by $\widetilde Y_i$ the approximation of $Y(t)$, using $\widetilde\phi$ in the RK4 and AB4 formulas; these formulas use $\overline{k}_1,\ldots,\overline{k}_4$, see (17) and (18).
The RK4 formulas (8) for the values $\widetilde Y^{\phi}_i$ with the correct solution $\phi$ are given by
$$\begin{aligned}
k_1 &= h\,Df(\phi_i)\,\widetilde Y^{\phi}_i \qquad (15)\\
k_2 &= h\,Df(\phi_{i+\frac12})\big(\widetilde Y^{\phi}_i + k_1/2\big)\\
k_3 &= h\,Df(\phi_{i+\frac12})\big(\widetilde Y^{\phi}_i + k_2/2\big)\\
k_4 &= h\,Df(\phi_{i+1})\big(\widetilde Y^{\phi}_i + k_3\big)\\
\widetilde Y^{\phi}_{i+1} &= \widetilde Y^{\phi}_i + \tfrac{1}{6}\big(k_1 + 2k_2 + 2k_3 + k_4\big).
\end{aligned}$$
Note that we need $\phi(t_i + h/2,\xi)$, which we denoted $\phi_{i+\frac12}$ in the formulas. The AB4 method for (4) is given by
$$\widetilde Y^{\phi}_{i+1} = \widetilde Y^{\phi}_i + \frac{h}{24}\Big(55\,Df(\phi_i)\widetilde Y^{\phi}_i - 59\,Df(\phi_{i-1})\widetilde Y^{\phi}_{i-1} + 37\,Df(\phi_{i-2})\widetilde Y^{\phi}_{i-2} - 9\,Df(\phi_{i-3})\widetilde Y^{\phi}_{i-3}\Big), \qquad (16)$$
and we do not need $\phi_{i+\frac12}$ for $i \ge 3$.
For our actual computation of the values $\widetilde Y_i$, we use the estimated values $\widetilde\phi_i$ of $\phi(t_i,\xi)$. Hence, the RK4 formulas are given by
$$\begin{aligned}
\overline{k}_1 &= h\,Df(\widetilde\phi_i)\,\widetilde Y_i \qquad (17)\\
\overline{k}_2 &= h\,Df(\widetilde\phi_{i+\frac12})\big(\widetilde Y_i + \overline{k}_1/2\big)\\
\overline{k}_3 &= h\,Df(\widetilde\phi_{i+\frac12})\big(\widetilde Y_i + \overline{k}_2/2\big)\\
\overline{k}_4 &= h\,Df(\widetilde\phi_{i+1})\big(\widetilde Y_i + \overline{k}_3\big)\\
\widetilde Y_{i+1} &= \widetilde Y_i + \tfrac{1}{6}\big(\overline{k}_1 + 2\overline{k}_2 + 2\overline{k}_3 + \overline{k}_4\big).
\end{aligned}$$
Note that we need estimates $\widetilde\phi_{i+\frac12} \approx \phi(t_i + h/2,\xi)$ in the formulas. Thus, we need to use RK4 with time-steps $h_0 := h/2$ for the initial-value problem (3) when we want to use its results in the RK4 formula (17) for the initial-value problem (4) to compute the values $\widetilde Y_1, \widetilde Y_2, \widetilde Y_3$. The AB4 method is given by
$$\widetilde Y_{i+1} = \widetilde Y_i + \frac{h}{24}\Big(55\,Df(\widetilde\phi_i)\widetilde Y_i - 59\,Df(\widetilde\phi_{i-1})\widetilde Y_{i-1} + 37\,Df(\widetilde\phi_{i-2})\widetilde Y_{i-2} - 9\,Df(\widetilde\phi_{i-3})\widetilde Y_{i-3}\Big), \qquad (18)$$
and we do not need $\widetilde\phi_{i+\frac12}$ for $i \ge 3$.
Similarly to the arguments of the previous subsection, we will show that the global error of approximation $\|Y_i - \widetilde Y_i\|_{\max}$ is bounded by a constant times $h^4$. In this case, however, there are two sources of error: one is using the RK4 and AB4 methods to solve the differential equation (4) numerically, and the other comes from the fact that we use the approximations $\widetilde\phi_i$ instead of the correct values $\phi(t_i,\xi)$.
For the difference between $Y_i$ and $\widetilde Y^{\phi}_i$ we will use the global error estimates of RK4 and AB4, while for the difference between $\widetilde Y^{\phi}_i$ and $\widetilde Y_i$ we estimate using the formulas directly.
We start with the first task and note that a result similar to (13) also holds in the case of time-dependent right-hand sides, to conclude that
$$\big\|Y_i - \widetilde Y^{\phi}_i\big\|_{\max} \le C_5 h^4 \qquad (19)$$
for $i = 1,\ldots,N$, where $Nh = H$ and the constant $C_5$ depends on $H$ and the derivatives of $f$ up to order 5 in $S$. Note that the derivatives up to order 4 of $v(t,z) = Df(\phi(t,\xi))z$ for $\xi \in S$ exist and are uniformly bounded by a constant; see the argumentation at the beginning of the section. This implies that there is a constant $C_6 > 0$ such that
$$\big\|\widetilde Y^{\phi}_i\big\|_{\max} \le \|Y_i\|_{\max} + \big\|Y_i - \widetilde Y^{\phi}_i\big\|_{\max} \le C_6 + C_5 h^4 \le C_6 + C_5 =: C_7 \qquad (20)$$
is bounded for all $i = 1,\ldots,N$, where $Nh = H$, and we have used that $h \le 1$.
Now we proceed with the second task and first show that
$$\big\|\widetilde Y^{\phi}_{i+1} - \widetilde Y_{i+1}\big\|_{\max} \le 2\,\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max} + C_8 h^5. \qquad (21)$$
Comparing the formulas (15) and (17), we have, using $\|AB\|_{\max} \le n\|A\|_{\max}\|B\|_{\max}$,
$$\begin{aligned}
\|k_1 - \overline{k}_1\|_{\max} &= h\,\big\|Df(\phi_i)\widetilde Y^{\phi}_i - Df(\widetilde\phi_i)\widetilde Y_i\big\|_{\max}\\
&\le hn\,\big\|Df(\phi_i) - Df(\widetilde\phi_i)\big\|_{\max}\big\|\widetilde Y^{\phi}_i\big\|_{\max} + hn\,\big\|Df(\widetilde\phi_i)\big\|_{\max}\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max}.
\end{aligned}$$
Using that $Df$ is locally Lipschitz continuous and thus has a Lipschitz constant $L_{Df}$ on the compact convex hull of $S$, and that the solution $\phi$ and its numerical approximation $\widetilde\phi$ starting at $\xi \in S$ lie in the convex hull, we obtain
$$\big\|Df(\phi_i) - Df(\widetilde\phi_i)\big\|_{\max} \le L_{Df}\,C_4 h^4$$
by (13). We define $L_v = \max_{x\in S}\|Df(x)\|_{\max}$. Altogether, we have
$$\|k_1 - \overline{k}_1\|_{\max} \le L_{Df}\,n\,C_4 h^5\,\big\|\widetilde Y^{\phi}_i\big\|_{\max} + hnL_v\,\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max}.$$
Now we proceed in a similar way with $k_2$. From the formula (15) we have $\|k_1\|_{\max} \le nhL_v\,\|\widetilde Y^{\phi}_i\|_{\max}$ and thus
$$\begin{aligned}
\|k_2 - \overline{k}_2\|_{\max} &= h\,\Big\|Df(\phi_{i+\frac12})\Big(\widetilde Y^{\phi}_i + \tfrac{k_1}{2}\Big) - Df(\widetilde\phi_{i+\frac12})\Big(\widetilde Y_i + \tfrac{\overline{k}_1}{2}\Big)\Big\|_{\max}\\
&\le h\,\big\|Df(\phi_{i+\frac12})\widetilde Y^{\phi}_i - Df(\widetilde\phi_{i+\frac12})\widetilde Y_i\big\|_{\max} + \frac{h}{2}\,\big\|Df(\phi_{i+\frac12})k_1 - Df(\widetilde\phi_{i+\frac12})\overline{k}_1\big\|_{\max}\\
&\le hn\,\big\|Df(\phi_{i+\frac12}) - Df(\widetilde\phi_{i+\frac12})\big\|_{\max}\big\|\widetilde Y^{\phi}_i\big\|_{\max} + hn\,\big\|Df(\widetilde\phi_{i+\frac12})\big\|_{\max}\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max}\\
&\quad + \frac{hn}{2}\,\big\|Df(\phi_{i+\frac12}) - Df(\widetilde\phi_{i+\frac12})\big\|_{\max}\|k_1\|_{\max} + \frac{hn}{2}\,\big\|Df(\widetilde\phi_{i+\frac12})\big\|_{\max}\|k_1 - \overline{k}_1\|_{\max}\\
&\le L_{Df}nC_4h^5\,\big\|\widetilde Y^{\phi}_i\big\|_{\max} + hnL_v\,\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max} + \tfrac12 L_{Df}n^2C_4h^6L_v\,\big\|\widetilde Y^{\phi}_i\big\|_{\max}\\
&\quad + \tfrac12 hnL_v\Big[L_{Df}nC_4h^5\,\big\|\widetilde Y^{\phi}_i\big\|_{\max} + hnL_v\,\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max}\Big]\\
&\le C_9 h^5\,\big\|\widetilde Y^{\phi}_i\big\|_{\max} + C_{10}\,h\,\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max},
\end{aligned}$$
where we have used $h \le 1$ and new constants.
Now we proceed in a similar way with $k_3$; note that by the formula (15) we have
$$\|k_2\|_{\max} \le nhL_v\Big(1 + \frac{L_v h n}{2}\Big)\big\|\widetilde Y^{\phi}_i\big\|_{\max}$$
and thus
$$\begin{aligned}
\|k_3 - \overline{k}_3\|_{\max} &= h\,\Big\|Df(\phi_{i+\frac12})\Big(\widetilde Y^{\phi}_i + \tfrac{k_2}{2}\Big) - Df(\widetilde\phi_{i+\frac12})\Big(\widetilde Y_i + \tfrac{\overline{k}_2}{2}\Big)\Big\|_{\max}\\
&\le hn\,\big\|Df(\phi_{i+\frac12}) - Df(\widetilde\phi_{i+\frac12})\big\|_{\max}\big\|\widetilde Y^{\phi}_i\big\|_{\max} + hn\,\big\|Df(\widetilde\phi_{i+\frac12})\big\|_{\max}\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max}\\
&\quad + \frac{hn}{2}\,\big\|Df(\phi_{i+\frac12}) - Df(\widetilde\phi_{i+\frac12})\big\|_{\max}\|k_2\|_{\max} + \frac{hn}{2}\,\big\|Df(\widetilde\phi_{i+\frac12})\big\|_{\max}\|k_2 - \overline{k}_2\|_{\max}\\
&\le L_{Df}nC_4h^5\,\big\|\widetilde Y^{\phi}_i\big\|_{\max} + hnL_v\,\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max} + \tfrac12 L_{Df}n^2C_4h^6L_v\Big(1 + \frac{L_v h n}{2}\Big)\big\|\widetilde Y^{\phi}_i\big\|_{\max}\\
&\quad + \tfrac12 hnL_v\Big[C_9 h^5\,\big\|\widetilde Y^{\phi}_i\big\|_{\max} + C_{10}\,h\,\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max}\Big]\\
&\le C_{11} h^5\,\big\|\widetilde Y^{\phi}_i\big\|_{\max} + C_{12}\,h\,\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max},
\end{aligned}$$
where we have used $h \le 1$ and new constants.
Finally, for $k_4$, where by (15) we have
$$\|k_3\|_{\max} \le nhL_v\Big(1 + \frac{L_v h n}{2} + \frac{L_v^2 h^2 n^2}{4}\Big)\big\|\widetilde Y^{\phi}_i\big\|_{\max},$$
and thus
$$\begin{aligned}
\|k_4 - \overline{k}_4\|_{\max} &= h\,\big\|Df(\phi_{i+1})\big(\widetilde Y^{\phi}_i + k_3\big) - Df(\widetilde\phi_{i+1})\big(\widetilde Y_i + \overline{k}_3\big)\big\|_{\max}\\
&\le hn\,\big\|Df(\phi_{i+1}) - Df(\widetilde\phi_{i+1})\big\|_{\max}\big\|\widetilde Y^{\phi}_i\big\|_{\max} + hn\,\big\|Df(\widetilde\phi_{i+1})\big\|_{\max}\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max}\\
&\quad + hn\,\big\|Df(\phi_{i+1}) - Df(\widetilde\phi_{i+1})\big\|_{\max}\|k_3\|_{\max} + hn\,\big\|Df(\widetilde\phi_{i+1})\big\|_{\max}\|k_3 - \overline{k}_3\|_{\max}\\
&\le L_{Df}nC_4h^5\,\big\|\widetilde Y^{\phi}_i\big\|_{\max} + hnL_v\,\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max} + L_{Df}n^2C_4h^6L_v\Bigg(1 + \frac{L_v hn}{2} + \Big(\frac{L_v hn}{2}\Big)^2\Bigg)\big\|\widetilde Y^{\phi}_i\big\|_{\max}\\
&\quad + hnL_v\Big[C_{11} h^5\,\big\|\widetilde Y^{\phi}_i\big\|_{\max} + C_{12}\,h\,\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max}\Big]\\
&\le C_{13} h^5\,\big\|\widetilde Y^{\phi}_i\big\|_{\max} + C_{14}\,h\,\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max},
\end{aligned}$$
where we have used $h \le 1$ and new constants.
Putting these parts together, one can see that
$$\begin{aligned}
\big\|\widetilde Y^{\phi}_{i+1} - \widetilde Y_{i+1}\big\|_{\max} &\le \big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max} + \tfrac{1}{6}\big\|k_1 - \overline{k}_1 + 2(k_2 - \overline{k}_2) + 2(k_3 - \overline{k}_3) + k_4 - \overline{k}_4\big\|_{\max}\\
&\le \big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max}\Big(1 + \big(nL_v + 2C_{10} + 2C_{12} + C_{14}\big)\frac{h}{6}\Big) + \frac{L_{Df}nC_4 + 2C_9 + 2C_{11} + C_{13}}{6}\,h^5\,\big\|\widetilde Y^{\phi}_i\big\|_{\max}\\
&\le 2\,\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max} + C_8 h^5,
\end{aligned}$$
where we define
$$C_8 := \frac{C_7}{6}\big(L_{Df}nC_4 + 2C_9 + 2C_{11} + C_{13}\big),$$
and we have used (20) and the fact that
$$h \le \frac{6}{nL_v + 2C_{10} + 2C_{12} + C_{14}} =: \bar h. \qquad (22)$$
This shows (21), which in turn shows, with $\|\widetilde Y^{\phi}_0 - \widetilde Y_0\|_{\max} = 0$ and by iteration, that
$$\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max} \le (2^i - 1)\,C_8 h^5 \le 7\,C_8 h^5 =: C_{15} h^5 \qquad (23)$$
for i = 1,2,3.
Now the initializing steps with RK4 are finished. For the AB4 method we follow a similar idea. For $i \ge 3$ we have
$$\begin{aligned}
\big\|\widetilde Y^{\phi}_{i+1} - \widetilde Y_{i+1}\big\|_{\max} \le \big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max} &+ \frac{55h}{24}\big\|Df(\phi_i)\widetilde Y^{\phi}_i - Df(\widetilde\phi_i)\widetilde Y_i\big\|_{\max} + \frac{59h}{24}\big\|Df(\phi_{i-1})\widetilde Y^{\phi}_{i-1} - Df(\widetilde\phi_{i-1})\widetilde Y_{i-1}\big\|_{\max}\\
&+ \frac{37h}{24}\big\|Df(\phi_{i-2})\widetilde Y^{\phi}_{i-2} - Df(\widetilde\phi_{i-2})\widetilde Y_{i-2}\big\|_{\max} + \frac{9h}{24}\big\|Df(\phi_{i-3})\widetilde Y^{\phi}_{i-3} - Df(\widetilde\phi_{i-3})\widetilde Y_{i-3}\big\|_{\max}.
\end{aligned}$$
Using the estimate
$$\begin{aligned}
\big\|Df(\phi_i)\widetilde Y^{\phi}_i - Df(\widetilde\phi_i)\widetilde Y_i\big\|_{\max} &\le n\,\big\|Df(\phi_i) - Df(\widetilde\phi_i)\big\|_{\max}\big\|\widetilde Y^{\phi}_i\big\|_{\max} + n\,\big\|Df(\widetilde\phi_i)\big\|_{\max}\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max}\\
&\le nL_{Df}C_4C_7\,h^4 + nL_v\,\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max},
\end{aligned}$$
similarly to the argumentation above for RK4, for each of the terms $i, i-1, i-2, i-3$ we obtain
$$\begin{aligned}
\big\|\widetilde Y^{\phi}_{i+1} - \widetilde Y_{i+1}\big\|_{\max} \le \big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max} &+ \frac{20}{3}\,nL_{Df}C_4C_7\,h^5 \qquad (24)\\
&+ hnL_v\Big(\frac{55}{24}\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max} + \frac{59}{24}\big\|\widetilde Y^{\phi}_{i-1} - \widetilde Y_{i-1}\big\|_{\max} + \frac{37}{24}\big\|\widetilde Y^{\phi}_{i-2} - \widetilde Y_{i-2}\big\|_{\max} + \frac{9}{24}\big\|\widetilde Y^{\phi}_{i-3} - \widetilde Y_{i-3}\big\|_{\max}\Big).
\end{aligned}$$
We have assumed that
$$h \le h^{**} := \frac{3}{20nL_v} \qquad (25)$$
and want to show that
$$\big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max} \le C_{16}\,2^N h^5 \le C_{16}\,2^H h^4 \qquad (26)$$
for $i = 1,\ldots,N$; here $C_{16} := \max\big(C_{15}, \tfrac{20}{3}\,nL_{Df}C_4C_7\big)$ and we have used that $h \le 1$. Hence, this shows that we have a global estimate of order 4 with a constant depending on $H$.
To show (26) we denote $a_i = \|\widetilde Y^{\phi}_i - \widetilde Y_i\|_{\max}$ and prove that
$$a_i \le b_i := \hat C\,(2^i - 1),$$
where $\hat C = C_{16} h^5$, for all $i = 0,\ldots,N$. Note that $b_i$ is the solution of the iteration $b_0 = 0$ and $b_{i+1} = 2 b_i + \hat C$. We will now show $a_i \le b_i$ by induction with respect to $i$. For $i = 0,1,2,3$ this follows from (23). Now we assume that for $i \ge 3$ the inequality $a_j \le b_j$ holds for all $j = 0,\ldots,i$ and we show it for $i+1$: by (24) we have
$$\begin{aligned}
a_{i+1} &\le a_i + \frac{20}{3}\,nL_{Df}C_4C_7\,h^5 + hnL_v\Big(\frac{55}{24}a_i + \frac{59}{24}a_{i-1} + \frac{37}{24}a_{i-2} + \frac{9}{24}a_{i-3}\Big)\\
&\le b_i + \frac{20}{3}\,nL_{Df}C_4C_7\,h^5 + hnL_v\,\frac{20}{3}\,b_i\\
&\le 2 b_i + \hat C = b_{i+1},
\end{aligned}$$
where we have used $h \le 3/(20nL_v)$, $h \le 1$ and the induction assumption. This shows the induction and thus (26).
Finally, we obtain for the error, using (19) and (26), that
$$\big\|Y_i - \widetilde Y_i\big\|_{\max} \le \big\|Y_i - \widetilde Y^{\phi}_i\big\|_{\max} + \big\|\widetilde Y^{\phi}_i - \widetilde Y_i\big\|_{\max} \le C_5 h^4 + C_{16}\,2^H h^4 \qquad (27)$$
for all $i = 1,\ldots,N$, hence a global error of order 4.
The algorithm to approximate the solutions to the initial-value problems (3) and (4) can now be summarized as follows (a Python sketch is given after Remark 3.1):
1. Fix the time-horizon $H$ and the number of time-steps $N$.
2. For the initiation phase for the AB4 multi-step method, fix $\widetilde x_0 = \xi$ and set $h_0 = \frac{1}{2}H/N$.
3. Use the RK4 formula (9) with $h = h_0$ to compute $\widetilde x_{i+1}$ for $i = 0,1,2,3,4,5$.
4. Relabel the solution terms using $\widetilde\phi_{i/2} = \widetilde x_i$ for $i = 0,1,\ldots,6$, e.g. $\widetilde\phi_{\frac12} = \widetilde x_1$, $\widetilde\phi_1 = \widetilde x_2$, etc.
5. Set $\widetilde Y_0 = I$, $h = H/N$, and use the RK4 formulas (17) to compute $\widetilde Y_{i+1}$ for $i = 0,1,2$.
6. Now the initialization phase for the AB4 method is over and we have $\widetilde\phi_i$ and $\widetilde Y_i$ at our disposal for $i = 0,1,2,3$. Set $h = H/N$ and use formulas (10) and (18) for $i = 3,4,\ldots,N-1$ to compute the remaining $\widetilde\phi_i$ and $\widetilde Y_i$.
Remark 3.1. The following two observations are useful for our approach.
a. Note that since we perform the computations on a compact set $S$ that is positively invariant for both the system (1) and the numerical integrator, we have finite upper bounds on the absolute values of the continuous $s$-th order derivatives of the components of $f$ if $f$ is $C^s$ on the convex hull of $S$. Thus, for $s = 5$ we can choose $C_j$ for $j = 1,2,\ldots,5$ and also $L_v$ as uniform constants for the whole interval $[0,H]$, independent of the time step $t_i$ and of $\xi \in S$.
b. It is not necessary to keep track of more than just the four most recent values of $\widetilde\phi_i$ and $\widetilde Y_i$ in step 6 to use formulas (10) and (18).
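The following Python sketch (ours; `f`, `Df` and all names are assumptions, and the weight $C \equiv I$ is deferred to Step III) implements steps 1–6 above for a single initial value $\xi$, producing the arrays $\widetilde\phi_i$ and $\widetilde Y_i$ on the grid $t_i = iH/N$.

```python
import numpy as np

def rk4_step(v, t, z, h):
    # One classical RK4 step for dz/dt = v(t, z); cf. (8).
    k1 = h * v(t, z)
    k2 = h * v(t + h / 2, z + k1 / 2)
    k3 = h * v(t + h / 2, z + k2 / 2)
    k4 = h * v(t + h, z + k3)
    return z + (k1 + 2 * k2 + 2 * k3 + k4) / 6

def trajectories(f, Df, xi, H, N):
    """Steps 1-6: approximate phi(t_i, xi) and Y(t_i), t_i = i*H/N, i = 0..N."""
    n, h, h0 = len(xi), H / N, H / (2 * N)
    # Steps 2-4: RK4 with half step h0 gives phi-tilde on the half-grid,
    # x[k] ~ phi(k*h/2, xi) for k = 0,...,6.
    x = [np.asarray(xi, dtype=float)]
    for k in range(6):
        x.append(rk4_step(lambda t, z: f(z), k * h0, x[k], h0))
    # Step 5: Y-tilde_1, Y-tilde_2, Y-tilde_3 via the RK4 formulas (17).
    Y = [np.eye(n)]
    for i in range(3):
        A0, Ah, A1 = Df(x[2 * i]), Df(x[2 * i + 1]), Df(x[2 * i + 2])
        k1 = h * A0 @ Y[i]
        k2 = h * Ah @ (Y[i] + k1 / 2)
        k3 = h * Ah @ (Y[i] + k2 / 2)
        k4 = h * A1 @ (Y[i] + k3)
        Y.append(Y[i] + (k1 + 2 * k2 + 2 * k3 + k4) / 6)
    # Step 6: AB4 formulas (10) and (18) for i = 3,...,N-1.
    phi = [x[0], x[2], x[4], x[6]]            # phi-tilde_0..3 on the full grid
    for i in range(3, N):
        fp = [f(phi[i - j]) for j in range(4)]
        phi.append(phi[i] + h / 24 * (55 * fp[0] - 59 * fp[1]
                                      + 37 * fp[2] - 9 * fp[3]))
        gY = [Df(phi[i - j]) @ Y[i - j] for j in range(4)]
        Y.append(Y[i] + h / 24 * (55 * gY[0] - 59 * gY[1]
                                  + 37 * gY[2] - 9 * gY[3]))
    return np.array(phi), np.array(Y)
```

For simplicity the sketch stores the whole trajectories; as noted in Remark 3.1 b, only the four most recent values are actually needed in the AB4 loop.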
3.3 Step III: Numerical Quadrature
The numerical quadrature of formula (2) can be done on the fly as explained below. Effectively, this can be implemented by interpreting $i$ modulo 4. Since the numerical solutions to (3) and (4) are accurate of order $O(h^4)$, there is little additional accuracy gained by using a higher-order formula than $O(h^4)$ for the numerical quadrature of (2). However, by using the Composite Trapezoidal Rule and a Romberg-like extrapolation, one can use an $O(h^{2(p+1)})$ method with negligible overhead, where $N = 2^p q$, $p,q \in \mathbb{N}^+$, and $h$ is the step size for both the numerical integration and the numerical quadrature. This method and its implementation are explained in detail in (Hafstein, 2019, Sec. III). We review the essential parts here.
Approximations to the solutions of system (3) and system (4) with a particular initial value $\phi(0,\xi) = \xi$ and $Y(0) = I$ are computed at $N+1$ equally distributed time points on the time interval $[0,H]$:
$$\widetilde\phi_i \approx \phi(t_i,\xi) \quad \text{and} \quad \widetilde Y_i \approx Y(t_i) \quad \text{at } t_i = \frac{iH}{N} \qquad (28)$$
for $i = 0,1,\ldots,N$, where $N = 2^p q$, $p,q \in \mathbb{N}^+$; how this is done and how accurate these values are was explained in Step II.
We first consider the quadrature rule error assuming that we use the true values for $\phi$ and $Y$; later we consider the error caused by using the approximate values $\widetilde\phi$ and $\widetilde Y$. We split the integral in (2) into two parts,
$$M(\xi) = I_\xi + \int_H^\infty Y(\tau)^T C(\phi(\tau,\xi))\,Y(\tau)\,d\tau, \quad \text{where} \quad I_\xi := \int_0^H Y(\tau)^T C(\phi(\tau,\xi))\,Y(\tau)\,d\tau,$$
and $I_\xi$ is approximated by $\widetilde M(\xi)$ using formula (5) and a variant of the Romberg integration. Set
$$\alpha(\tau,\xi) = Y(\tau)^T C(\phi(\tau,\xi))\,Y(\tau) \qquad (29)$$
and $\alpha_i := \alpha(t_i,\xi)$ for $i = 0,1,\ldots,N$. First, we use the composite Trapezoidal rule to approximate $I_\xi$ using $N, N/2, \ldots, q$ intervals. For this, define recursively $N_0 := N$ and $N_{k+1} := N_k/2$ for $k = 0,\ldots,p-1$; note that $N_p = q$. Define $h_k := H/N_k$ for $k = 0,\ldots,p$. We set
$$\mathrm{Trap}_k = h_k\left(\frac{\alpha_0 + \alpha_N}{2} + \sum_{j=1}^{N_k - 1} \alpha_{j\,2^k}\right) \qquad (30)$$
for $k = 0,1,\ldots,p$. It is well known, cf. e.g. (Bauer et al., 1963), that if the integrand is $C^{2(p+1)}$, which is the case if $f \in C^{2(p+1)+1}$ and $C \in C^{2(p+1)}$, then by extrapolation using the tableau
$$R_{r,0} := \mathrm{Trap}_r \quad \text{for } r = 0,1,\ldots,p \qquad (31)$$
and then, for $s = 1,\ldots,p$,
$$R_{r,s} = \frac{4^s R_{r,s-1} - R_{r+1,s-1}}{4^s - 1} \quad \text{for } 0 \le r \le p-s, \qquad (32)$$
we get that
$$\big\|R_{0,p} - I_\xi\big\|_{\max} \le C_R\, H\, h^{2(p+1)}\, \max_{t\in[0,H]}\left\|\frac{\partial^{2(p+1)}}{\partial t^{2(p+1)}}\,\alpha(t,\xi)\right\|_{\max}$$
for a constant $C_R$, independent of $H$ and $\alpha$. Here $h = h_0 = H/N = H/(2^p q)$ is the length of the interval between two consecutive time-points the solution is computed at.
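A minimal Python sketch (ours; array shapes and names are assumptions) of the quadrature step (30)–(32): given the $N+1$ sampled matrices $\alpha_i$, or their approximations $\widetilde\alpha_i$, it returns the extrapolated value $R_{0,p}$.

```python
import numpy as np

def romberg_from_samples(alpha, H, p, q):
    """Romberg-like quadrature (30)-(32) from N+1 = 2**p * q + 1 samples.

    alpha: array of shape (N+1, n, n), alpha[i] ~ alpha(t_i, xi), t_i = i*H/N.
    Returns R_{0,p}, the extrapolated approximation of the integral over [0, H].
    """
    N = (2 ** p) * q
    assert alpha.shape[0] == N + 1
    # Composite trapezoidal sums Trap_k on N_k = N / 2**k intervals, formula (30).
    trap = []
    for k in range(p + 1):
        Nk = N // 2 ** k
        hk = H / Nk
        samples = alpha[:: 2 ** k]                    # alpha_{j * 2**k}, j = 0..Nk
        trap.append(hk * ((samples[0] + samples[-1]) / 2
                          + samples[1:-1].sum(axis=0)))
    # Richardson extrapolation tableau, formulas (31)-(32).
    R = list(trap)                                    # stage s = 0: R_{r,0} = Trap_r
    for s in range(1, p + 1):
        R = [(4 ** s * R[r] - R[r + 1]) / (4 ** s - 1) for r in range(p - s + 1)]
    return R[0]                                       # R_{0,p}
```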
Finally, we substitute the values $\alpha_i$ by $\widetilde\alpha_i := \widetilde Y_i^T\, C(\widetilde\phi_i)\,\widetilde Y_i$ in the formulas (30) and denote the corresponding $R_{r,s}$ by $\widetilde R_{r,s}$. By the estimates (27) and (13), and because $C$ is Lipschitz on $S$, we have for a fixed $H > 0$ that $\|\alpha_i - \widetilde\alpha_i\|_{\max} \le C_I h^4$, where $C_I > 0$ is a constant that can be chosen independently of $\xi \in S$. Further, $R_{r,s} = \sum_{i=0}^N \lambda_i \alpha_i$, where $\lambda_i \ge 0$ and $\sum_{i=0}^N \lambda_i = 1$, see (Bauer et al., 1963), and thus $\|R_{r,s} - \widetilde R_{r,s}\|_{\max} \le (C_R + C_I)h^4$ independently of $\xi \in S$ if $p \ge 1$. Thus, there is a constant $C_H$, independent of $h > 0$ and $\xi \in S$ but dependent on $H$, such that
$$\big\|I_\xi - \widetilde R_{r,s}\big\|_{\max} \le C_H h^4 \qquad (33)$$
for $h \le \min(h^*, \bar h, h^{**}, 1)$.
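Combining the three steps, a Python sketch (ours) of the full computation of $\widetilde M(\xi) = \widetilde R_{0,p}$ with the choice $C \equiv I$ reuses `trajectories` and `romberg_from_samples` from the earlier sketches:

```python
import numpy as np

def contraction_metric_estimate(f, Df, xi, H, p, q):
    """Compute M~(xi), the Romberg value R~_{0,p} of (5), with C(x) = I."""
    N = (2 ** p) * q
    phi, Y = trajectories(f, Df, np.asarray(xi, float), H, N)   # Steps I and II
    alpha = np.einsum('ikj,ikl->ijl', Y, Y)      # alpha~_i = Y_i^T @ Y_i  (C = I)
    return romberg_from_samples(alpha, H, p, q)  # Step III

# Example call (assumed test system dx/dt = -x, so Df(x) = -I):
# M = contraction_metric_estimate(lambda x: -x, lambda x: -np.eye(len(x)),
#                                 [0.5, -0.2], H=6.0, p=3, q=10)
```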
To summarize all these discussions, let us provide
the error estimate for the numerical computation of
the contraction metric M.
4 MAIN RESULT
After all the preparation in the last two sections, we
are ready to prove the main result of this work.
Theorem 4.1 (Error estimate). Assume $f$ in (1) is $C^{2(p+1)+1}$ and $C$ is $C^{2(p+1)}$ for an integer $p \ge 1$, and let $M$ be defined by formula (2) on $A(x_0)$, i.e.
$$M(\xi) = \int_0^\infty \psi(\tau,\xi)^T C(\phi(\tau,\xi))\,\psi(\tau,\xi)\,d\tau.$$
Then, for any compact $K \subset A(x_0)$ and any $\varepsilon > 0$, there exists $H^* > 0$ such that for every fixed and finite $H \ge H^*$ there exists $N^* = 2^p q^*$, $p,q^* \in \mathbb{N}^+$, such that for all $N = 2^p q$, $q \ge q^*$, we have
$$\big\|M(\xi) - \widetilde M(\xi)\big\|_{\max} \le \varepsilon$$
for all $\xi \in K$. Here $h := H/N$ and $\widetilde M(\xi)$ is the result of the numerical method, i.e. the matrix $\widetilde R_{0,p}$, computed as described in Section 3 with initial value $\xi$, using the time interval $[0,H]$ and the approximations $\widetilde\phi$ and $\widetilde Y$ with step-size $h$ in the numerical integration and quadrature.
Proof. Let $K \subset A(x_0)$ and $\varepsilon > 0$ be given. By (Giesl et al., 2023d, Thms. 2.1 and 3.5) there exist a compact $S \subset \mathbb{R}^n$ with $K \subset S \subset A(x_0)$ and an $h^* > 0$ such that $S$ is positively invariant for system (1) and for the AB4 method initialized with RK4 to approximate its trajectories for all step sizes $0 < h \le h^*$, i.e. $\widetilde\phi_i \in S$ for all $i \in \mathbb{N}_0$ if $\widetilde\phi_0 = \xi \in S$ and the step-size is $h$.
The proof of (Giesl and Wendland, 2019, Thm. 2.2, (13), (14), and (15)) implies that there are constants $d, d_1, d_2, \rho > 0$ such that
$$\begin{aligned}
\|\psi(\tau,\xi)\|_{\max} &\le d_1\, e^{-\rho\tau} \qquad (34)\\
\|C(\phi(\tau,\xi))\|_{\max} &\le d_2 \qquad (35)\\
\big\|\psi(\tau,\xi)^T C(\phi(\tau,\xi))\,\psi(\tau,\xi)\big\|_{\max} &\le d\, e^{-2\rho\tau} \qquad (36)
\end{aligned}$$
for all $\tau \ge 0$ and all $\xi \in S$. Note that the proof also holds for the case of a compact subset of $A(x_0)$. The reason is that a compact subset of the basin of attraction is uniformly attracted to $x_0$, which implies that uniform constants can be chosen in (Giesl and Wendland, 2019, (13), (14), and (15)) which only depend on $S$. Let $H^* > 0$ be so large that
$$\left\|\int_H^\infty \psi(\tau,\xi)^T C(\phi(\tau,\xi))\,\psi(\tau,\xi)\,d\tau\right\|_{\max} \le \frac{d}{2\rho}\,e^{-2\rho H} \le \frac{\varepsilon}{2}. \qquad (37)$$
Fix some $H \ge H^*$. Choose $N^* = 2^p q^*$ so large that both $\tilde h := H/N^* \le \min(h^*,1)$ and $C_H(\tilde h)^4 \le \frac{\varepsilon}{2}$ hold, where $C_H$ is the constant from (33). Then we have
$$\left\|\int_0^H \psi(\tau,\xi)^T C(\phi(\tau,\xi))\,\psi(\tau,\xi)\,d\tau - \widetilde R_{0,p}\right\|_{\max} \le C_H(\tilde h)^4 \le \frac{\varepsilon}{2} \qquad (38)$$
at any point $\xi \in K \subset A(x_0) \subset \mathbb{R}^n$. Here $\widetilde R_{0,p}$ is the Romberg approximation of the integral
$$\int_0^H \alpha(\tau,\xi)\,d\tau,$$
with $\alpha$ defined in (29), where the values $\alpha_i := \alpha(t_i,\xi)$, $t_i := i\tilde h$, have been substituted by the approximations $\widetilde\alpha_i$, computed using the numerical solutions $\widetilde\phi$ and $\widetilde Y$ to $\tau \mapsto \phi(\tau,\xi)$ and $Y(\tau) = \psi(\tau,\xi)$ as described in Step I and Step II and using step-size $\tilde h$. Combining (37) and (38) gives $\|M(\xi) - \widetilde M(\xi)\|_{\max} \le \varepsilon$ for all $\xi \in K$. For any step-size $h := H/N$, $N = 2^p q$, where $q \ge q^*$ is an integer, an analogous estimate holds with $\tilde h$ replaced by $h \le \tilde h$, and the proposition follows.
5 CONCLUSIONS
We have shown that contraction metrics for systems $\dot{x} = f(x)$ with an exponentially stable equilibrium $x_0$ can be computed with arbitrary precision. In particular, we have proven that the error in the computations can be uniformly bounded on compact subsets of the basin of attraction $A(x_0)$. The bound is uniform in the sense that, given a compact subset $K \subset A(x_0)$ and an $\varepsilon > 0$, we can choose the parameters of the numerical method, i.e. the time interval $H$ and the step size $h$, such that the difference between the computed metric $\widetilde M(\xi)$ and the metric $M(\xi)$ from formula (2) is bounded by $\varepsilon$ for all $\xi \in K$. This result advances the error analysis of the computation of contraction metrics and justifies the use of these methods in practice.
ACKNOWLEDGEMENT
The research done for this paper was partially sup-
ported by the Icelandic Research Fund in the grant
228725-051, Linear Programming as an Alternative
to LMIs for Stability Analysis of Switched Systems,
which is gratefully acknowledged.
REFERENCES
Aminzare, Z. and Sontag, E. (2014). Contraction methods
for nonlinear systems: A brief introduction and some
open problems. In Proceedings of the 53rd IEEE Con-
ference on Decision and Control, pages 3835–3847.
Aylward, E., Parrilo, P., and Slotine, J.-J. (2008). Stability
and robustness analysis of nonlinear systems via con-
traction metrics and SOS programming. Automatica,
44(8):2163–2170.
Bauer, F., Rutishauser, H., and Stiefel, E. (1963). New aspects in numerical quadrature. In Proceedings of Symposia in Applied Mathematics: Experimental Arithmetic, High Speed Computing and Mathematics, volume 15, pages 199–218. AMS.
Borg, G. (1960). A condition for the existence of orbitally stable solutions of dynamical systems. Kungl. Tekn. Högsk. Handl. 153.
Demidovič, B. (1961). On the dissipativity of a certain non-linear system of differential equations. I. Vestnik Moskov. Univ. Ser. I Mat. Meh., 1961(6):19–27.
Deuflhard, P. and Hohmann, A. (2008). Numerische Math-
ematik 2. de Gruyter, 4th edition.
Forni, F. and Sepulchre, R. (2014). A differential Lyapunov
framework for Contraction Analysis. IEEE Transac-
tions on Automatic Control, 59(3):614–628.
Giesl, P. (2015). Converse theorems on contraction metrics
for an equilibrium. J. Math. Anal. Appl., (424):1380–
1403.
Giesl, P. (2019). Computation of a contraction metric for
a periodic orbit using meshfree collocation. SIAM J.
Appl. Dyn. Syst., 18(3):1536–1564.
Giesl, P. and Hafstein, S. (2013). Construction of a CPA
contraction metric for periodic orbits using semidefi-
nite optimization. Nonlinear Anal., 86:114–134.
Giesl, P., Hafstein, S., Haraldsdottir, M., Thorsteinsson, D.,
and Kawan, C. (2023a). Subgradient algorithm for
computing contraction metrics for equilibria. J. Com-
put. Dynamics, 10(2):281–303.
Giesl, P., Hafstein, S., and Kawan, C. (2023b). Review on
contraction analysis and computation of contraction
metrics. J. Comput. Dynamics, 10(1):1–47.
Giesl, P., Hafstein, S., and Mehrabinezhad, I. (2021a).
Computation and verification of contraction metrics
for exponentially stable equilibria. J. Comput. Appl.
Math., 390:Paper No. 113332.
Giesl, P., Hafstein, S., and Mehrabinezhad, I. (2021b).
Computation and verification of contraction metrics
for periodic orbits. J. Math. Anal. Appl., 503(2):Pa-
per No. 125309, 32.
Giesl, P., Hafstein, S., and Mehrabinezhad, I. (2023c). Con-
traction metric computation using numerical integra-
tion and quadrature. Submitted.
Giesl, P., Hafstein, S., and Mehrabinezhad, I. (2023d). Pos-
itively invariant sets for ODEs and numerical integra-
tion. In Proceedings of the 20th International Con-
ference on Informatics in Control, Automation and
Robotics (ICINCO), page (submitted).
Giesl, P. and Wendland, H. (2019). Construction of a con-
traction metric by meshless collocation. Discrete Con-
tin. Dyn. Syst. Ser. B, 24(8):3843–3863.
Hafstein, S. (2019). Numerical ODE solvers and integration
methods in the computation of CPA Lyapunov func-
tions. In 18th European Control Conference (ECC),
pages 1136–1141. IEEE.
Hartman, P. (1961). On stability in the large for systems
of ordinary differential equations. Canadian J. Math.,
13:480–492.
Hartman, P. (1964). Ordinary Differential Equations. Wi-
ley, New York.
Krasovskiĭ, N. N. (1963). Problems of the Theory of Stability of Motion. Mir, Moscow, 1959. English translation by Stanford University Press.
Lewis, D. (1951). Differential equations referred to a vari-
able metric. Amer. J. Math., 73:48–58.
Lewis, D. C. (1949). Metric properties of differential equa-
tions. Amer. J. Math., 71:294–312.
Lohmiller, W. and Slotine, J.-J. (1998). On Contrac-
tion Analysis for Non-linear Systems. Automatica,
34:683–696.
Sauer, T. (2012). Numerical Analysis. Pearson, 2nd edition.
Simpson-Porco, J. and Bullo, F. (2014). Contraction the-
ory on Riemannian manifolds. Systems Control Lett.,
65:74–80.
Walter, W. (1998). Ordinary Differential Equations. Springer.