Semiglobal Asymptotic Stabilization of a Class of
Nonlinear Sampled-data Systems using Emulated
Controllers
Elena Panteley and Romain Postoyan
Laboratoire des Signaux et Systèmes, CNRS
3 rue Joliot Curie, 91192 Gif-sur-Yvette, France
Abstract. Considering nonlinear sampled-data systems, it has been shown in [14] that emulating a continuous-time controller that ensures global asymptotic stability properties in continuous time recovers these properties in an appropriate practical sense, provided the sampling period is chosen sufficiently small. In this study, we provide a similar result, for a general class of systems, using a hybrid formulation that allows deriving explicit bounds on the maximum allowable sampling period.
1 Introduction
A number of studies have focused on the stabilization problem of nonlinear sampled-data systems during the last decades (see the overviews [13], [14] and the references cited therein). A common approach consists in emulating a known continuous-time controller using a sample-and-hold device. Based on discrete-time model approximations and using results of [17], it has been shown in [14] that, by choosing a sufficiently small sampling period, asymptotic stability properties are recovered in an appropriate practical sense, under mild conditions. Practical state convergence might be an issue in practice, especially when the sampling period cannot be taken small enough. It is also important for engineers to know an explicit bound on the sampling period that can be taken so that the designed controllers ensure the desired asymptotic state convergence. Thus, a number of papers propose solutions for the asymptotic stabilization of nonlinear sampled-data systems together with the knowledge of an explicit bound on the maximum allowable sampling period $T_{MASP}$. In most of these works, global asymptotic stability properties are studied. Two exceptions are however available in the literature. First, in [9] a hybrid stabilization method is proposed for some classes of systems: it consists in decomposing the state space into a number of regions and designing, for each region, a controller that steers the state to the next region, closer to the origin. A semiglobal asymptotic stability property is shown to hold for systems in output feedback form in [21], but no explicit bound on $T_{MASP}$ is given. Concerning results on global asymptotic stability properties for nonlinear systems, some papers are available in the literature. In [4], global Lipschitz conditions on the system and the static state-feedback nonlinearities are supposed to apply; thus the global exponential stability of the system origin is recovered under sampling. In [1], considering the Euler approximation of a dynamic feedback controller, Lyapunov stability results for impulsive systems are applied, under conditions similar to those in [4]. A small-gain theorem for a class of hybrid systems that does not satisfy the classical semigroup property is developed in [7], which allows one to
design discrete-time controllers for classes of nonlinear systems. The same authors derive in [9] an analytic bound on $T_{MASP}$ when using emulated controllers, by modeling sampled-data systems as time-delay systems. Recently, techniques first developed for networked systems have been applied to the stabilization problem of nonlinear sampled-data systems [16]. Writing nonlinear sampled-data systems with emulated controllers as hybrid systems in the modeling framework of [3, 2], sufficient Lyapunov-type conditions are proposed and an explicit bound on $T_{MASP}$ is given.
In this study, considering a known controller that is supposed to ensure input-to-state stability of the closed-loop system with respect to measurement errors in continuous time, it is shown that the emulated controller ensures the asymptotic stability of the system origin provided the sampling period satisfies an explicit boundedness condition. Similarly to [16], the system is written as the interconnection of the continuous-time closed-loop system and the 'error' system due to the sampling. The stability analysis relies on trajectory-based arguments and Lyapunov-like analysis to ensure bounds on the state and the sampling error.
2 Notations
The Euclidean norm of a vector is denoted by $|\cdot|$. For a function $f : \mathbb{R} \to \mathbb{R}^n$ and $t_1 \le t_2 \in \mathbb{R}$, $\|f\|_{[t_1, t_2)}$ stands for $\sup_{\tau \in [t_1, t_2)} |f(\tau)|$. Let $C(\mathbb{R}^p, \mathbb{R}^q)$, $p, q \in \mathbb{N}$, denote the space of all continuous mappings $\mathbb{R}^p \to \mathbb{R}^q$. $B_d \subset \mathbb{R}^n$ denotes the open ball centered at $0$ and of radius $d$. For initial conditions we use the notations $t_0 \ge 0$, $x_0 = x(t_0)$, $e_0 = e(t_0)$; finally, to simplify the notation we sometimes omit the arguments and, when it is clear from the context, write $V(x(t))$, or even $V(t)$, in place of $V(x(t, t_0, x_0))$.
3 Problem Statement
Consider a system:
$$\dot{x}_p = f_p(x_p, u), \qquad (1)$$
$$y = h_p(x_p), \qquad (2)$$
where $x_p \in \mathbb{R}^{n_{x_p}}$ denotes the state vector of the plant, $u \in \mathbb{R}^{n_u}$ the input vector, $y \in \mathbb{R}^{n_y}$ the output vector, $n_{x_p}, n_u, n_y \in \mathbb{N}$, $f_p : \mathbb{R}^{n_{x_p}} \times \mathbb{R}^{n_u} \to \mathbb{R}^{n_{x_p}}$ is locally Lipschitz with $f_p(0, 0) = 0$, and $h_p : \mathbb{R}^{n_{x_p}} \to \mathbb{R}^{n_y}$ is differentiable, its partial derivatives are locally Lipschitz and $h_p(0) = 0$.
The following dynamic output-feedback controller is considered for the system (1)-(2):
$$\dot{x}_c = f_c(x_c, y), \qquad (3)$$
$$u = h_c(x_c, y), \qquad (4)$$
where $x_c \in \mathbb{R}^{n_{x_c}}$ denotes the state vector of the controller, $n_{x_c} \in \mathbb{N}$, $f_c : \mathbb{R}^{n_{x_c}} \times \mathbb{R}^{n_y} \to \mathbb{R}^{n_{x_c}}$ is locally Lipschitz with $f_c(0, 0) = 0$, and $h_c : \mathbb{R}^{n_{x_c}} \times \mathbb{R}^{n_y} \to \mathbb{R}^{n_u}$ is differentiable with locally Lipschitz partial derivatives and $h_c(0, 0) = 0$. For the sake of generality, all the results are stated for the system (1)-(4), but they also apply to the case of static output or state feedbacks. Denoting $x = [x_p^\top, x_c^\top]^\top \in \mathbb{R}^{n_x}$, $n_x = n_{x_p} + n_{x_c}$, the following assumption is supposed to apply throughout the paper.
Assumption A 1. The origin x = 0 is globally asymptotically stable for the closed-loop
system (1)-(4).
Attention is focused on the case where the input $u$ and the measurement vector $y$ are sampled at the same instants $\{t_k\}_{k \in \mathbb{N}}$ using a sample-and-hold device. In the sequel we will use the following assumption on the sampling instants.
Assumption A 2. The sequence of sampling instants $\{t_k\}_{k \in \mathbb{N}}$ satisfies the following:
(i) There exist positive constants $\upsilon, T_{max} \in \mathbb{R}_{>0}$ such that $\upsilon \le t_{k+1} - t_k \le T_{max}$ for all $k \ge 0$.
(ii) The sequence $\{t_k\}_{k \in \mathbb{N}}$ is unbounded.
Remark. Assumption A2 allows the sampling sequence to be non-uniform. The lower
boundedness condition on the sampling periods is not restrictive since υ can be taken
arbitrarily small.
The considered sampled-data system can be rewritten in the following way: for $k \in \mathbb{N}$ and $t \in (t_k, t_{k+1}]$,
$$\dot{x} = f(x, e), \qquad (5)$$
$$\dot{e} = g(e, x), \qquad (6)$$
and for $t = t_k$,
$$x(t_k^+) = x(t_k), \qquad (7)$$
$$e(t_k^+) = 0, \qquad (8)$$
where $e = x - x(t_k)$ (partitioned as $e = [e_p^\top, e_c^\top]^\top$ conformably with $x$), $f = [f_p(x_p, h_{ce})^\top, f_c(x_c, h_{pe})^\top]^\top$, $h_{pe}(x, e) = h_p(x(t_k)) = h_p(x_p - e_p)$, $h_{ce}(x, e) = h_c(x_c(t_k), y_k) = h_c(x_c - e_c, h_{pe})$ and $g(e, x) = f(x, e)$. Due to the properties of the functions $f_p$, $f_c$, $h_p$, $h_c$, the functions $f$ and $g$ thus introduced are locally Lipschitz. Since, by Assumption A2, the sampling sequence is not generated by the state of the system, the system (5)-(8) satisfies the classical semigroup property (see Example 2.12 in [7]).
The proposed representation of the sampled-data system is similar to that of [16], with the difference lying in the definition of the variable $e$.
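To fix ideas, a minimal simulation sketch of the flow dynamics (5)-(6) and the jump rule (7)-(8) is given below. The scalar plant $\dot{x}_p = x_p + u$ with the emulated static feedback $u = -2 x_p(t_k)$, as well as the sampling parameters, are illustrative assumptions introduced here and are not taken from the developments of the paper.

```python
# Minimal simulation sketch of the sampled-data model (5)-(8).
# The scalar plant dx_p/dt = x_p + u with emulated feedback u = -2*x_p(t_k)
# is a hypothetical example chosen only to illustrate the flow/jump structure;
# it is not taken from the paper.
import numpy as np

def f(x, e):
    # Flow map of (5): the loop is written in terms of x and e = x - x(t_k),
    # so the held input is u = -2*(x - e) = -2*x(t_k).
    return x - 2.0 * (x - e)

def simulate(x0, t_end=5.0, v=0.01, T_max=0.3, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    t, x, e = 0.0, x0, 0.0                   # e(t_0) = 0 is taken here for simplicity
    ts, xs = [t], [x]
    while t < t_end:
        t_next = t + rng.uniform(v, T_max)   # non-uniform sampling (Assumption A2)
        while t < t_next:                    # flow phase: (5)-(6), here de/dt = dx/dt
            h = min(dt, t_next - t)
            dx = f(x, e)
            x += h * dx
            e += h * dx
            t += h
            ts.append(t)
            xs.append(x)
        e = 0.0                              # jump phase (7)-(8): e(t_k^+) = 0
    return np.array(ts), np.array(xs)

if __name__ == "__main__":
    ts, xs = simulate(x0=1.0)
    print("final |x| =", abs(xs[-1]))        # decays when T_max is small enough
```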
Our objective is to establish certain stability properties of the system (5)-(8) in the case where Assumptions A1-A2 are satisfied. Namely, we are interested in the semiglobal stability property defined next.
Definition 1. System (5)-(8) is said to be Semi-Globally Asymptotically Stable (SGAS) with respect to $T$ if for all $\Delta \in \mathbb{R}_{>0}$ there exist $T_{max} \in \mathbb{R}_{>0}$ and $\beta \in \mathcal{KL}$ such that for all $T \in [\upsilon, T_{max})$, $x(t_0) \in B_\Delta$ and for all $t \in [t_0, \infty)$ the following inequality holds:
$$|[x(t)^\top, e(t)^\top]| \le \beta(|[x(t_0)^\top, e(t_0)^\top]|, t - t_0). \qquad (9)$$
If (9) holds for $\Delta = \infty$, then system (5)-(8) is said to be Globally Asymptotically Stable (GAS).
The approach we use is quite similar to the one proposed in [8] for the design of hybrid observers for sampled-data systems. Indeed, similarly to [8], an ISS-like property with respect to the measurement errors is exploited for the stability analysis. Actually, we base our analysis on the following theorem, which is similar to the result given in Theorem 2 of [20]; in our case, however, the bound on the admissible input does not depend on the system initial condition but rather on the radius of the ball of initial conditions for the state and a chosen overshoot.
Theorem 1. Consider the system
$$\dot{x} = f(x, u), \qquad (10)$$
where $x \in \mathbb{R}^n$ and the function $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ is continuous and locally Lipschitz. Let $\Delta \in \mathbb{R}_{>0}$ be arbitrary and $x_0 \in B_\Delta$. If the system (10) is GAS with the input $u \equiv 0$, then there exist a function $\beta \in \mathcal{KL}$ and a continuous positive definite function $\delta : \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$, and for each $\Delta > 0$ there exists a function $\gamma_\Delta \in \mathcal{K}$, such that for any $t_0, t \ge 0$ with $t \ge t_0$ and each measurable, essentially bounded input $u(\cdot)$ for which
$$\|u\|_{[t_0, t)} < \delta(\Delta), \qquad (11)$$
the solution of (10) exists at least for $\tau \in [t_0, t)$ and satisfies on this interval the following bound:
$$|x(\tau)| \le \beta(|x_0|, \tau - t_0) + \gamma_\Delta(\|u\|_{[t_0, t)}). \qquad (12)$$
Proof. Since the origin $x = 0$ is GAS for the system $\dot{x} = f(x, 0)$, it follows from Proposition 13 in [18] (see also [22]) that there exist functions $\alpha_1, \alpha_2 \in \mathcal{K}_\infty$, $\alpha_3 \in \mathcal{K}$ and a Lyapunov function $V \in C^1(\mathbb{R}^n, \mathbb{R})$ such that for all $x \in \mathbb{R}^n$ we have
$$\alpha_1(|x|) \le V(x) \le \alpha_2(|x|),$$
$$\frac{\partial V}{\partial x} f(x, 0) \le -V(x), \qquad \left|\frac{\partial V}{\partial x}(x)\right| \le \alpha_3(|x|).$$
Then, for the system (10) we have
$$\frac{\partial V}{\partial x} f(x, u) \le -V(x) + \frac{\partial V}{\partial x}\left[f(x, u) - f(x, 0)\right] \le -V(x) + \alpha_3(|x|)\,|f(x, u) - f(x, 0)|.$$
Since the function $f$ is continuous, it follows from Lemma 2 that there exist a strictly increasing function $c \in C(\mathbb{R}, [1, \infty))$ and a function $d \in \mathcal{K}$ such that $|f(x, u) - f(x, 0)| \le c(|x|)\,d(|u|)$, and therefore we have
$$\frac{\partial V}{\partial x} f(x, u) \le -V(x) + c_1(|x|)\,d(|u|),$$
where $c_1(s) = \alpha_3(s)\,c(s)$. Notice that the function $c_1 \in \mathcal{K}$.
Let $\Delta, \epsilon > 0$ and $x_0 \in B_\Delta$ be fixed and arbitrary otherwise, and define the functions $\delta$ and $\psi \in \mathcal{K}_\infty$ as follows:
$$\psi(s) = (1 + \epsilon)\,\alpha_1^{-1} \circ \alpha_2(s), \qquad \delta(s) = \frac{\alpha_1 \circ \psi(s) - \alpha_2(s)}{c(\psi(s))}.$$
The functions $\alpha_1, \alpha_2 \in \mathcal{K}_\infty$ and $\alpha_2(s) \ge \alpha_1(s)$; hence the function $\psi \in \mathcal{K}_\infty$ and $\psi(s) > \alpha_1^{-1} \circ \alpha_2(s) > s$ for all $s > 0$, therefore $\alpha_1 \circ \psi(s) - \alpha_2(s) > 0$ for all $s > 0$. Since $c(s) \ge 1$ for all $s \ge 0$, the function $\delta$ defined above is continuous and positive definite.
Claim 1. If the input satisfies the bound (11) for $\tau \in [t_0, t)$, then it holds that
$$\|x\|_{[t_0, t)} \le \psi(\Delta). \qquad (13)$$
Proof of Claim 1. We proceed by contradiction. Assume that there exists $t^\star \in [t_0, t)$ such that $|x(t^\star)| = \psi(\Delta)$ and let $t_1 = \inf\{\tau \in [t_0, t) : |x(\tau)| = \psi(\Delta)\}$. Then for all $\tau \in [t_0, t_1]$ we have that
$$\dot{V}\big|_{(10)} = \frac{\partial V}{\partial x} f(x, u) \le -V(x) + c(\psi(\Delta))\,d(|u|);$$
using the comparison principle [10] we obtain that for all $\tau \in [t_0, t_1]$
$$V(x(\tau)) \le V(x_0)\,e^{-(\tau - t_0)} + \int_{t_0}^{\tau} c(\psi(\Delta))\,d(\|u\|_{[t_0, \tau)})\,e^{-(\tau - s)}\,ds \le V(x_0)\,e^{-(\tau - t_0)} + c(\psi(\Delta))\,d(\|u\|_{[t_0, \tau)}). \qquad (14)$$
Combining the last inequality with (11) and the definition of $\delta$ we obtain that for all $\tau \in [t_0, t_1]$
$$V(x(\tau)) \le V(x_0) + \alpha_1(\psi(\Delta)) - \alpha_2(\Delta) < \alpha_1(\psi(\Delta)).$$
Thus, $V(x(t_1)) < \alpha_1(\psi(\Delta))$, which implies that $|x(t_1)| < \psi(\Delta)$, and we arrive at a contradiction with the initial assumption that $|x(t_1)| = \psi(\Delta)$; hence Claim 1 is proved.
Next, since for any $\tau \in [t_0, t)$ we have $|x(\tau)| \le \psi(\Delta)$, it follows from (14) and the properties of the function $V(x)$ that on the same interval
$$\alpha_1(|x(\tau)|) \le V(x(\tau)) \le V(x_0)\,e^{-(\tau - t_0)} + c(\psi(\Delta))\,d(\|u\|_{[t_0, t)}) \le \alpha_2(|x_0|)\,e^{-(\tau - t_0)} + c(\psi(\Delta))\,d(\|u\|_{[t_0, t)})$$
and therefore
$$|x(\tau)| \le \alpha_1^{-1}\Big(\alpha_2(|x_0|)\,e^{-(\tau - t_0)} + c(\psi(\Delta))\,d(\|u\|_{[t_0, t)})\Big) \le \alpha_1^{-1}\Big(2\alpha_2(|x_0|)\,e^{-(\tau - t_0)}\Big) + \alpha_1^{-1}\Big(2 c_\Delta\, d(\|u\|_{[t_0, t)})\Big),$$
where $c_\Delta = c(\psi(\Delta))$. Since $\alpha_1, \alpha_2 \in \mathcal{K}_\infty$ and $d \in \mathcal{K}$, it is clear from the last inequality that there exists a function $\beta \in \mathcal{KL}$ and for each $\Delta > 0$ there exists a function $\gamma_\Delta \in \mathcal{K}$ such that for all $\tau \in [t_0, t)$ the bound (12) is satisfied.
4 Main Results
As mentioned in the Introduction, it is well known that the sampling of the system output and of the control input is usually a source of instability, and that the only possibility to overcome this issue consists in restricting the upper bound on the sampling period. The effect of the sampling is mostly due to the dynamics of the variable $e$. Thus, it is of interest to estimate an upper bound on this variable, taking into account the fact that $e(t_k^+) = 0$ for $k \ge 1$, i.e. every sampling period starts with a zero initial condition for this variable.
Lemma 1. Consider the system (10) and assume that the function $f$ is continuous, locally Lipschitz and $f(0, 0) = 0$. Then, for any $\mu \in \mathbb{R}_{>0}$ there exist a locally Lipschitz function $W : \mathbb{R}^n \to \mathbb{R}_{\ge 0}$ with bounded derivative $\partial W(x)/\partial x$ (where it exists) and a $C^1$ function $\gamma \in \mathcal{K}$ such that for all $(x, u) \in \mathbb{R}^n \times \mathbb{R}^m$
$$\frac{\partial W}{\partial x}(x)\, f(x, u) \le \mu W(x) + \gamma(|u|). \qquad (15)$$
The proof of Lemma 1 is presented in the appendix. It shows that the function $\rho$ used in the construction of $W$ is not necessarily unbounded. Thus, according to Lemma 1, for any $\mu \in \mathbb{R}_{>0}$ there exists $\bar\alpha \in \mathbb{R}_{>0} \cup \{\infty\}$ such that $\rho : \mathbb{R}_{\ge 0} \to [0, \bar\alpha)$ is of class $\mathcal{K}$ ($\mathcal{K}_\infty$ if $\bar\alpha = \infty$).
if ¯α = ).
Remark. Lemma 1 is similar to Lemma 11 in [18], but here, instead of finding an expo-
nentially decreasing positive definite function of the state, an exponentially increasing
one is obtained.
In the remaining part of the paper we assume that for the system (6) a function W is
constructed according to Lemma 1 with a constant µ R
>0
given. Note that, since W
is locally Lipschitz, using the arguments given in the footnote 8 in [15], this holds for
almost all (x, e) R
n
x
+n
e
, along solutions to (6):
˙
W (e) µW (e) + γ(|x|). (16)
The following theorem considers the case when subsystem (5) is ISS and gives conditions under which there exists $T_{max}$ such that the system (5)-(8) is GAS if the maximal sampling period is less than $T_{max}$.
We start by introducing the following assumption, which will be used to ensure that the solutions of the sampled-data system do not explode during the first sampling period.
Assumption A 3. The system
$$\dot{x} = f(x, x + c_e) \qquad (17)$$
is forward complete for any parameter $c_e \in \mathbb{R}^n$.
Remark 1. From Theorem 2 in [23] it follows that Assumption A3 is equivalent to assuming the existence of a proper and smooth function $\Psi : \mathbb{R}^n \to \mathbb{R}_{\ge 0}$ such that along solutions of (17) we have
$$\dot{\Psi} \le \Psi \qquad (18)$$
for any $c_e \in \mathbb{R}^n$.
Remark 2. Assumption A3 can actually be replaced by the equivalent assumption of forward completeness of the system $\dot{e} = g(e, e + c_x)$. The choice between the two depends on which of these systems is simpler to analyze.
Theorem 2. Consider the system (5)-(8) and let Assumptions A1-A3 hold. Suppose that for the system (5)-(6) there exist positive definite functions $V, W : \mathbb{R}^n \to \mathbb{R}_{\ge 0}$, functions $\alpha_{iv}, g_v, g_w \in \mathcal{K}_\infty$, $\alpha_{iw} \in \mathcal{K}$, $i = 1, 2$, and positive constants $\mu$ and $\sigma$ such that along solutions of the system (5)-(6) we have
$$\alpha_{1v}(|x|) \le V(x) \le \alpha_{2v}(|x|), \qquad (19)$$
$$\alpha_{1w}(|e|) \le W(e) \le \alpha_{2w}(|e|), \qquad (20)$$
$$\dot{V} \le -\sigma V + g_v(|e|), \qquad (21)$$
$$\dot{W} \le \mu W + g_w(|x|), \qquad (22)$$
and the functions $g_v$, $g_w$, $\alpha_{1v}$, $\alpha_{1w}$ satisfy the following linear gain conditions
$$g_v \circ \alpha_{1w}^{-1}(s) \le k_1 s, \qquad (23)$$
$$g_w \circ \alpha_{1v}^{-1}(s) \le k_2 s, \qquad (24)$$
where $k_1, k_2$ are positive constants. Then, if $T_{max}$ from Assumption A2 satisfies the inequality $T_{max} < T^*$, where
$$T^* = \frac{1}{\mu + \sigma}\ln\left(1 + \frac{\sigma(\sigma + \mu)}{k_1 k_2}\right),$$
the system (5)-(8) is GAS.
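For illustration only, consider again the hypothetical scalar loop used in the simulation sketch above, i.e. $\dot{x} = -x + 2e$, $\dot{e} = -x + 2e$. With $V(x) = |x|$ and $W(e) = |e|$, conditions (19)-(24) hold with $\sigma = 1$, $\mu = 2$, $g_v(s) = 2s$, $g_w(s) = s$, $\alpha_{1v}(s) = \alpha_{1w}(s) = s$, hence $k_1 = 2$ and $k_2 = 1$; all of these values are assumptions tied to this example, not results of the paper. The following sketch evaluates the corresponding bound $T^*$.

```python
# Evaluation of the bound T* of Theorem 2 for illustrative constants.
# sigma, mu, k1, k2 below correspond to the hypothetical scalar example
# dx/dt = -x + 2e with V(x) = |x|, W(e) = |e|; they are assumptions made
# for illustration, not data from the paper.
import math

def T_star(sigma, mu, k1, k2):
    # Maximum allowable sampling period bound of Theorem 2.
    return math.log(1.0 + sigma * (sigma + mu) / (k1 * k2)) / (mu + sigma)

if __name__ == "__main__":
    print(T_star(sigma=1.0, mu=2.0, k1=2.0, k2=1.0))   # approximately 0.305
```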
Proof. We start the proof with the remark that there is an important difference between the first sampling interval and the rest of the sequence: it is only at the beginning of the first sampling interval that we can have $e(t_0) \ne 0$, while for all other intervals ($k \ge 1$) we have $e(t_k^+) = 0$, see (8). Therefore, we treat these two cases separately and later combine the results. We start with the case of the first sampling interval.
Case I. $k = 0$. On the interval $[t_0, t_1)$ we have $\dot{e} = \dot{x}$, so that $e(t) = x(t) + (e_0 - x_0)$ and $x(t) = e(t) + (x_0 - e_0)$, and we can rewrite the system (5)-(6) as follows:
$$\dot{x} = f(x, x + c_e), \qquad \dot{e} = g(e, e + c_x),$$
with the constants $c_e = e_0 - x_0$ and $c_x = x_0 - e_0$. Due to Assumption A3 there exists a function $\Psi$ such that (18) is satisfied; hence for any initial conditions $(x_0, e_0)$ we have $\dot{\Psi} \le \Psi$ and therefore, during the interval $[t_0, t_1) \subset [t_0, t_0 + T)$, we have $\Psi(x(t), e(t)) \le \Psi(x_0, e_0)\,e^{T}$. Since the function $\Psi$ is proper and positive definite, there exist functions $\alpha_{i\psi} \in \mathcal{K}_\infty$, $i = 1, 2$, such that $\alpha_{1\psi}(|(x, e)|) \le \Psi(x, e) \le \alpha_{2\psi}(|(x, e)|)$; thus for all $t \in [t_0, t_1]$
$$|(x(t), e(t))| \le \alpha_{1\psi}^{-1} \circ \alpha_{2\psi}(|(x_0, e_0)|)\,e^{T}. \qquad (25)$$
Case II. $k \ge 1$. This part of the proof is based on the following two observations:
– starting with $k = 1$, at the beginning of each sampling period we have $e(t_k^+) = 0$, and therefore we can use (22) to estimate the error $e(t)$ during the sampling period;
– to ensure asymptotic stability it is enough to show that there exists a Lyapunov function $V(x)$ such that for any $k \ge 1$ and any $t \in (t_k, t_{k+1}]$ we have
$$V(x(t)) \le V(x(t_k)) \qquad (26)$$
and moreover, that there exists $\varepsilon \in (0, 1)$ such that
$$V(x(t_{k+1})) \le \varepsilon V(x(t_k)). \qquad (27)$$
Notice that condition (26) ensures Lyapunov stability of solutions, while (27) ensures the decrease of the Lyapunov function over each sampling period and thus its convergence to zero. From the convergence to zero of the sequence $V(x(t_k))$ follows the convergence to zero of $x(t_k)$, hence of $x(t)$, and therefore of the differences $e(t) = x(t) - x(t_k)$.
Thus we only need to ensure that the conditions of the theorem guarantee that, during any sampling period of length less than $T^*$, the inequalities (26) and (27) are satisfied. In order to prove (26) we proceed by contradiction: we assume that there exists $k \ge 1$ such that (26) is not true, and let $t^\star \in (t_k, t_{k+1})$ be the first moment such that $V(x(t^\star)) = V(x(t_k))$.
Let $t \in (t_k, t^\star]$. Since $e(t_k^+) = 0$, it follows from (22), (24) and (19) that
$$W(e(t)) \le \int_{t_k}^{t} e^{\mu(t - \tau)}\, g_w(|x(\tau)|)\,d\tau \le \int_{t_k}^{t} e^{\mu(t - \tau)}\, g_w \circ \alpha_{1v}^{-1}(V(x(\tau)))\,d\tau \le k_2 \int_{t_k}^{t} e^{\mu(t - \tau)}\, V(x(\tau))\,d\tau.$$
By assumption, for $\tau \in (t_k, t^\star]$ we have $V(x(\tau)) \le V(x(t_k))$ and therefore we conclude that
$$W(e(t)) \le \frac{k_2}{\mu}\, V(x(t_k))\left(e^{\mu(t - t_k)} - 1\right). \qquad (28)$$
In a similar way, from (21), (23) and (20) we obtain that
$$V(x(t)) \le V(x(t_k))\,e^{-\sigma(t - t_k)} + \int_{t_k}^{t} e^{-\sigma(t - \tau)}\, g_v(|e(\tau)|)\,d\tau \le V(x(t_k))\,e^{-\sigma(t - t_k)} + k_1 \int_{t_k}^{t} e^{-\sigma(t - \tau)}\, W(e(\tau))\,d\tau$$
$$\le V(x(t_k))\,e^{-\sigma(t - t_k)} + \frac{k_1 k_2}{\mu}\, V(x(t_k)) \int_{t_k}^{t} e^{-\sigma(t - \tau)}\left(e^{\mu(\tau - t_k)} - 1\right)d\tau,$$
where we used (28) in the last inequality.
After simple but tedious calculations we obtain that
$$V(x(t)) \le V(x(t_k))\,\phi(t - t_k), \qquad (29)$$
where, for the elapsed time $s = t - t_k \ge 0$,
$$\phi(s) = \frac{k_1 k_2}{\mu(\mu + \sigma)}\,e^{\mu s} + \left(1 + \frac{k_1 k_2}{\sigma(\sigma + \mu)}\right)e^{-\sigma s} - \frac{k_1 k_2}{\mu\sigma}. \qquad (30)$$
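The 'simple but tedious calculations' consist essentially of evaluating the two elementary integrals
$$\int_{t_k}^{t} e^{-\sigma(t - \tau)}\, e^{\mu(\tau - t_k)}\,d\tau = \frac{e^{\mu(t - t_k)} - e^{-\sigma(t - t_k)}}{\mu + \sigma}, \qquad \int_{t_k}^{t} e^{-\sigma(t - \tau)}\,d\tau = \frac{1 - e^{-\sigma(t - t_k)}}{\sigma},$$
substituting them into the previous estimate and collecting the coefficients of $e^{\mu(t - t_k)}$, $e^{-\sigma(t - t_k)}$ and the constant term, which gives (30) with $s = t - t_k$; in particular, the three coefficients sum to one, consistent with $\phi(0) = 1$.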
Notice that $\phi(0) = 1$, while for $s \in (0, T^*)$ the derivative of $\phi$ satisfies
$$\phi'(s) = e^{-\sigma s}\left[\frac{k_1 k_2}{\mu + \sigma}\,e^{(\mu + \sigma)s} - \sigma - \frac{k_1 k_2}{\mu + \sigma}\right] < e^{-\sigma s}\left[\frac{k_1 k_2}{\mu + \sigma}\,e^{(\mu + \sigma)T^*} - \sigma - \frac{k_1 k_2}{\mu + \sigma}\right] = e^{-\sigma s}\left[\frac{k_1 k_2 + \sigma(\sigma + \mu)}{\mu + \sigma} - \sigma - \frac{k_1 k_2}{\mu + \sigma}\right] = 0,$$
and therefore for all $s \in (0, T^*)$ we have $\phi(s) < 1$.¹ Now, since $t^\star \in (t_k, t_{k+1})$, we have $t^\star - t_k \le T_{max} < T^*$ and therefore from (29) it follows that
$$V(x(t^\star)) \le V(x(t_k^+))\,\phi(t^\star - t_k) < V(x(t_k^+)) = V(x(t_k)),$$
and we arrive at a contradiction. Hence the estimate (26) is satisfied during any sampling interval $(t_k, t_{k+1}]$. Next, let $\varepsilon = \phi(T_{max})$. Since $T_{max} < T^*$ and $\phi$ is decreasing on $[0, T^*)$ with $\phi(0) = 1$, we have $\varepsilon < 1$, and then from (29) we obtain that on any sampling interval
$$V(x(t_{k+1})) \le V(x(t_k))\,\phi(t_{k+1} - t_k) \le V(x(t_k))\,\phi(T_{max}) = \varepsilon V(x(t_k)),$$
and so the bound (27) is satisfied for any sampling period $(t_k, t_{k+1}]$.

¹ Notice that actually what is needed is the value of $T^*$ corresponding to the positive solution of the equation $\phi(s) = 1$; the expression for $T^*$ used in the theorem corresponds to the interval where $\phi'(s)$ is negative. This is done to give a simple expression for $T^*$. However, (30) can be used to obtain a better estimate of $T^*$ numerically.
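As indicated in the footnote, a less conservative admissible bound is the positive solution of $\phi(s) = 1$. A minimal bisection sketch is given below, using the same illustrative constants as in the earlier example (again assumptions, not values from the paper); any $T_{max}$ strictly smaller than the computed root yields $\varepsilon = \phi(T_{max}) < 1$.

```python
# Numerical refinement of the admissible sampling period: solve phi(s) = 1,
# s > 0, by bisection, where phi is (30) written as a function of the elapsed
# time s = t - t_k. The constants are the illustrative ones used earlier and
# are assumptions, not values from the paper.
import math

def phi(s, sigma, mu, k1, k2):
    a = k1 * k2 / (mu * (mu + sigma))
    b = 1.0 + k1 * k2 / (sigma * (sigma + mu))
    c = k1 * k2 / (mu * sigma)
    return a * math.exp(mu * s) + b * math.exp(-sigma * s) - c

def refined_bound(sigma, mu, k1, k2, tol=1e-10):
    # phi(0) = 1 and phi decreases on [0, T*), so the positive root of
    # phi(s) = 1 lies to the right of the conservative bound T*.
    t_star = math.log(1.0 + sigma * (sigma + mu) / (k1 * k2)) / (mu + sigma)
    lo, hi = t_star, 2.0 * t_star
    while phi(hi, sigma, mu, k1, k2) < 1.0:   # enlarge the bracket if needed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid, sigma, mu, k1, k2) < 1.0:
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    # Any T_max strictly below this root gives epsilon = phi(T_max) < 1.
    print(refined_bound(sigma=1.0, mu=2.0, k1=2.0, k2=1.0))
```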
5 Conclusions
In this paper, for a general class of nonlinear systems, we presented a result on the asymptotic stability of a continuous-time system in closed loop with an emulated controller. We used a hybrid formulation that allows giving explicit bounds on the maximum allowable sampling period.
References
1. L. Burlion, T. Ahmed-Ali, and F. Lamnabhi-Lagarrigue. On the stability of a class of nonlinear hybrid systems. Nonlinear Analysis, 65(12):2236-2247, 2006.
2. C. Cai, R.G. Sanfelice, and A.R. Teel. Hybrid dynamical systems: robust stability and control. In CCC'07 (Chinese Control Conference), pages 29-36, 2007.
3. R. Goebel and A.R. Teel. Solutions to hybrid inclusions via set and graphical convergence with stability theory applications. Automatica, 42:573-587, 2006.
4. G. Hermann, S.K. Spurgeon, and C. Edwards. Discretization of sliding mode based control schemes. In CDC'99 (IEEE Conference on Decision and Control), Phoenix, U.S.A., pages 4257-4262, 1999.
5. Z.-P. Jiang, I.M.Y. Mareels, and Y. Wang. A Lyapunov formulation of the nonlinear small-gain theorem for interconnected ISS systems. Automatica, 32(8):1211-1215, 1996.
6. Z.P. Jiang, A.R. Teel, and L. Praly. Small-gain theorem for ISS systems and applications. Mathematics of Control, Signals and Systems, 7:95-120, 1994.
7. I. Karafyllis and Z.P. Jiang. A small-gain theorem for a wide class of feedback systems with control applications. SIAM Journal on Control and Optimization, 46(4):1483-1517, 2007.
8. I. Karafyllis and C. Kravaris. From continuous-time design to sampled-data design of nonlinear observers. In CDC'08 (IEEE Conference on Decision and Control), Cancun, Mexico, pages 5408-5413, 2008.
9. I. Karafyllis and C. Kravaris. Global stability results for systems under sampled-data control. Submitted to International Journal of Robust and Nonlinear Control, 2008.
10. H.K. Khalil. Nonlinear Systems. Prentice-Hall, Englewood Cliffs, New Jersey, U.S.A., 3rd edition, 2002.
11. J. Kurzweil. On the inversion of Lyapunov's second theorem on the stability of motion. American Mathematical Society Translations, 24:19-77, 1956.
12. F. Mazenc and L. Praly. Adding integrations, saturated controls, and stabilization for feedforward systems. IEEE Transactions on Automatic Control, 41(11):1559-1578, 1996.
13. S. Monaco and D. Normand-Cyrot. Issues on nonlinear digital systems. European Journal of Control, 7:160-178, 2001.
14. D. Nesic and A.R. Teel. A framework for stabilization of nonlinear sampled-data systems based on their approximate discrete-time models. IEEE Transactions on Automatic Control, 49:1103-1122, 2004.
15. D. Nesic and A.R. Teel. Input-output stability properties of networked control systems. IEEE Transactions on Automatic Control, 49:1650-1667, 2004.
16. D. Nesic, A.R. Teel, and D. Carnevale. Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems. IEEE Transactions on Automatic Control, to appear, 2008.
17. D. Nesic, A.R. Teel, and E.D. Sontag. Formulas relating KL stability estimates of discrete-time and sampled-data nonlinear systems. Systems & Control Letters, 38(1):49-60, 1999.
18. L. Praly and Y. Wang. Stabilization in spite of matched unmodelled dynamics and an equivalent definition of input-to-state stability. Mathematics of Control, Signals and Systems, 9(1):1-33, 1996.
19. R.G. Sanfelice and A.R. Teel. Lyapunov analysis of sample-and-hold hybrid feedbacks. In CDC'06 (IEEE Conference on Decision and Control), San Diego, U.S.A., pages 4879-4884, 2006.
20. E.D. Sontag. Further facts about input-to-state stabilization. IEEE Transactions on Automatic Control, 35:473-476, 1990.
21. B. Wu and Z. Ding. Semi-global asymptotic stability of a class of sampled-data systems in output feedback form. In CDC'08 (IEEE Conference on Decision and Control), Cancun, Mexico, pages 5420-5425, 2008.
22. L. Grüne, E.D. Sontag, and F.R. Wirth. Asymptotic stability equals exponential stability, and ISS equals finite energy gain - if you twist your eyes. Systems & Control Letters, 38(2):127-134, 1999.
23. D. Angeli and E.D. Sontag. Forward completeness, unboundedness observability, and their Lyapunov characterizations. Systems & Control Letters, 38:209-217, 1999.
Appendix
Proof of Lemma 1. Let $\mu > 0$ be arbitrary and define the auxiliary function $\Phi(x) = |x|$. Similarly to Lemma 11 in [18], this function will serve as the basis to construct the function $W$ which satisfies inequality (15). Taking the derivative of $\Phi$ along the solutions of (10) we obtain
$$\frac{\partial \Phi}{\partial x}(x)\, f(x, u) \le |f(x, u)|. \qquad (31)$$
Notice that the function $f$ satisfies the assumptions of Lemma 2 and therefore there exist $C^1$ functions $\lambda_i$, $C^1$ functions $\kappa_i \in \mathcal{K}$ and positive constants $c_i > 0$, $i = 1, 2$, such that
$$\lambda_i(s) = (\kappa_i(s) + c_i)\,s, \qquad (32)$$
and
$$|f(x, u)| \le \lambda_1(|x|) + \lambda_2(|u|). \qquad (33)$$
It then follows that
$$\frac{\partial \Phi}{\partial x}(x)\, f(x, u) \le \lambda_1(|x|) + \lambda_2(|u|) = \lambda_1(\Phi(x)) + \lambda_2(|u|).$$
Next we define the function $\rho$ as
$$\rho(\tau) = \exp\left(\int_1^\tau \frac{a}{\lambda_1(s)}\,ds\right) \ \text{for all } \tau \in \mathbb{R}_{>0}, \qquad \rho(0) = 0, \qquad (34)$$
where $a = \max\{\mu, 2(c_1 + \kappa_1(1))\}$.
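For a concrete illustration of (34), take $\lambda_1(s) = (s + 1)s$ (i.e. $\kappa_1(s) = s$, $c_1 = 1$) and $\mu = 1$; these values are an assumption made only for this sketch. Then $a = \max\{1, 2(1 + 1)\} = 4$ and
$$\rho(\tau) = \exp\left(\int_1^\tau \frac{4}{(s + 1)s}\,ds\right) = \left(\frac{2\tau}{\tau + 1}\right)^4,$$
which is of class $\mathcal{K}$ but bounded (here $\bar\alpha = 16$) and has a bounded derivative on $\mathbb{R}_{>0}$, in accordance with the claim below and with the remark following Lemma 1.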
Claim. The function $\rho$ thus defined is continuous and locally Lipschitz, and there exists a constant $c > 0$ such that $\rho'(s) \le c$ for all $s > 0$.
We will prove this claim a little later; for now we assume that it is true and define the function $W$ as $W = \rho \circ \Phi$. The function $W$ is locally Lipschitz (as a composition of two locally Lipschitz functions) and we have
$$\frac{\partial W}{\partial x}(x)\, f(x, u) = \frac{a}{\lambda_1(\Phi(x))}\, W(x)\, \frac{\partial \Phi}{\partial x}(x)\, f(x, u) \le \frac{a}{\lambda_1(\Phi(x))}\, W(x)\,\big(\lambda_1(\Phi(x)) + \lambda_2(|u|)\big)$$
$$\le \mu W(x) + \frac{\mu W(x)}{\lambda_1(\Phi(x))}\,\lambda_2(|u|) = \mu W(x) + \mu\,\rho'(\Phi(x))\,\lambda_2(|u|) \le \mu W(x) + c\mu\,\lambda_2(|u|). \qquad (35)$$
This ends the proof of the lemma, modulo the claim.
Proof of the Claim. The function $\rho$ defined in (34) is continuous on $\mathbb{R}_{>0}$ and strictly increasing. From (32) we have that for all $s \in [0, 1]$
$$c_1 s \le \lambda_1(s) \le (c_1 + \kappa_1(1))\,s \qquad (36)$$
and therefore $\int_1^\tau \frac{a}{\lambda_1(s)}\,ds \le \int_1^\tau \frac{a}{(c_1 + \kappa_1(1))\,s}\,ds$ for $\tau \in (0, 1)$. Since the last integral diverges to $-\infty$ as $\tau$ goes to zero, the function $\rho$ is continuous on $\mathbb{R}_{\ge 0}$ and therefore $\rho \in \mathcal{K}$.
In contrast with [18], we cannot guarantee that the function $\rho$ thus constructed belongs to $\mathcal{K}_\infty$; actually, this function belongs to $\mathcal{K}_\infty$ only under certain conditions.
Next we prove that the function $\rho$ is locally Lipschitz. Since it is a $C^2$ function on $\mathbb{R}_{>0}$, it is enough to show that $\lim_{\tau \to 0^+} \rho'(\tau)$ exists and is bounded.² For $\tau \ne 0$ we have
$$\rho'(\tau) = \frac{a}{\lambda_1(\tau)}\,\rho(\tau), \qquad \rho''(\tau) = \left(\frac{a^2}{\lambda_1^2(\tau)} - \frac{a\,\lambda_1'(\tau)}{\lambda_1^2(\tau)}\right)\rho(\tau). \qquad (37)$$
From (32) it follows that $\lambda_1'(0) = c_1$ and $\lambda_1'(\tau) > 0$ for all $\tau \ge 0$. Thus there exists a constant $\delta > 0$ such that for $0 < \tau < \delta$ we have $\lambda_1'(\tau) \le 2c_1 < a$, so that on the interval $(0, \delta)$ the function $\rho'$ is positive and strictly increasing, and hence $\lim_{\tau \to 0^+} \rho'(\tau)$ exists.

² In doing this we mostly retrace the steps of the proof of Lemma 11 in [18].
Next we show that this limit is bounded. From the first equality in (37) we have that on the interval $(0, 1)$
$$\rho'(\tau) = \frac{a}{\lambda_1(\tau)}\exp\left(\int_1^\tau \frac{a}{\lambda_1(s)}\,ds\right) \le \frac{a}{c_1\tau}\exp\left(\int_1^\tau \frac{a}{(c_1 + \kappa_1(1))\,s}\,ds\right) = \frac{a}{c_1\tau}\exp\left(\frac{a}{c_1 + \kappa_1(1)}\ln\tau\right) = \frac{a}{c_1}\,\tau^{\frac{a}{c_1 + \kappa_1(1)} - 1} \le \frac{a}{c_1}\,\tau, \qquad (38)$$
where we used the definition of the constant $a$ in the last inequality. From (38) it follows trivially that $\lim_{\tau \to 0^+} \rho'(\tau) = 0$, and therefore we have proved that the function $\rho$ is locally Lipschitz.³

³ Actually, following the reasoning of Lemma 11 of [18] and slightly increasing the constant $a$, we can ensure that $\rho$ is a $C^1$ function. However, since the function $\Phi$ is only locally Lipschitz, in general we cannot expect to find a $C^1$ function $W$.
To prove boundedness of $\rho'$ on $\mathbb{R}_{>0}$ we are left only with the case $\tau \ge 1$. From (32) it follows that there exists $\tau^\star > 0$ such that $\kappa_1(\tau^\star) + c_1 = a$; without loss of generality we can assume that $\tau^\star > 1$. Using the lower estimate $\lambda_1(\tau) \ge c_1\tau$ and (37), we obtain that for all $\tau \in [1, \tau^\star]$ the following holds:
$$\rho'(\tau) \le \frac{a}{c_1\tau}\exp\left(\int_1^\tau \frac{a}{c_1 s}\,ds\right) = \frac{a}{c_1\tau}\exp\left(\frac{a}{c_1}\ln\tau\right) = \frac{a}{c_1}\,\tau^{\frac{a}{c_1} - 1} \le \Delta,$$
where $\Delta = \frac{a}{c_1}(\tau^\star)^{\frac{a}{c_1} - 1}$. Finally, since $\lambda_1(s) \ge a s$ for all $s \ge \tau^\star$, for all $\tau \ge \tau^\star$ the following holds:
$$\rho'(\tau) \le \frac{a}{c_1\tau}\exp\left(\int_1^{\tau^\star} \frac{a}{c_1 s}\,ds + \int_{\tau^\star}^{\tau} \frac{a}{a s}\,ds\right) = \frac{a}{c_1\tau}\,(\tau^\star)^{\frac{a}{c_1}}\,\frac{\tau}{\tau^\star} = \frac{a}{c_1}(\tau^\star)^{\frac{a}{c_1} - 1} = \Delta,$$
and therefore the function $\rho'$ is bounded on $\mathbb{R}_{>0}$.
Lemma 2. Let $n, m, l \in \mathbb{N}$ and let $F : \mathbb{R}^{n+m} \to \mathbb{R}^l$ be a continuous function. Then the following statements hold for all $(x, y) \in \mathbb{R}^{n+m}$.
A1. There exist a function $d \in \mathcal{K}$ and a continuously differentiable, strictly increasing function $c : \mathbb{R}_{\ge 0} \to [1, +\infty)$ such that the following inequality holds:
$$|F(x, y) - F(x, 0)| \le c(|x|)\,d(|y|). \qquad (39)$$
A2. If, in addition, the function $F$ is locally Lipschitz and $F(0, 0) = 0$, then there exist continuously differentiable functions $\gamma_i \in \mathcal{K}$ and nonnegative constants $c_i \ge 0$ ($i = 1, 2$) such that
$$|F(x, y)| \le \lambda_1(|x|) + \lambda_2(|y|), \qquad (40)$$
where $\lambda_i(s) = [c_i + \gamma_i(s)]\,s$, $i = 1, 2$.
Proof.
A1. From Lemma A.1 in [12] we have that there exist functions $\gamma_0, \gamma_1 \in \mathcal{K}_\infty$, with $\gamma_1 \in C^1$, such that, for all $(x, y) \in \mathbb{R}^{n+m}$,
$$|F(x, y) - F(x, 0)| \le \gamma_0(2|y|)\left(1 + \gamma_1(|x|^2 + |y|^2)\right).$$
Using properties of class $\mathcal{K}_\infty$ functions and denoting $c(s) = 1 + \gamma_1(2s^2)$, $d(s) = \gamma_0(2s)\left(1 + \gamma_1(2s^2)\right)$, we obtain
$$|F(x, y) - F(x, 0)| \le \gamma_0(2|y|)\left(1 + \gamma_1(2|x|^2) + \gamma_1(2|y|^2)\right) \le \gamma_0(2|y|)\left(1 + \gamma_1(2|x|^2)\right) + \gamma_0(2|y|)\,\gamma_1(2|y|^2)$$
$$\le \left(\gamma_0(2|y|) + \gamma_0(2|y|)\,\gamma_1(2|y|^2)\right)\left(1 + \gamma_1(2|x|^2)\right) = c(|x|)\,d(|y|).$$
Continuous differentiability of the function $c$ and the other properties follow straightforwardly from the definitions of the functions $c$ and $d$ and the fact that $\gamma_1 \in C^1$.
A2. Define $z \in \mathbb{R}^{n+m}$ as $z = (x^\top, y^\top)^\top$ and let $\tilde F(z) = F(x, y)$. Since the function $\tilde F$ is locally Lipschitz in $z$ and $\tilde F(0) = 0$, there exists a continuous function $L : \mathbb{R}^{n+m} \to \mathbb{R}_{\ge 0}$ such that $|\tilde F(z)| \le L(z)\,|z|$. Based on $L(z)$ we define the function $l_0 : \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ as $l_0(s) = \sup_{\{z : |z| \le s\}} L(z)$ and $l_0(0) = L(0)$. Since $L(z)$ is continuous, the function $l_0(s)$ is well defined, continuous at $s = 0$ and nondecreasing. It is easy to show that we can always upper bound the function $l_0$ by a strictly increasing continuously differentiable function, i.e. there always exist a $C^1$ function $l_1 \in \mathcal{K}$ and a constant $c_1 \ge 0$ such that $l_0(s) \le l_1(s) + c_1$ for all $s \ge 0$.
Notice that $|z| \le |x| + |y|$ and that $l_1(s_1)\,s_2 \le l_1(s_1)\,s_1 + l_1(s_2)\,s_2$ for any $s_1, s_2 \ge 0$; the latter is due to the fact that $l_1 \in \mathcal{K}$. Using these inequalities we obtain that
$$|F(x, y)| = |\tilde F(z)| \le (l_1(|z|) + c_1)\,|z| \le (l_1(|x| + |y|) + c_1)(|x| + |y|) \le (l_1(2|x|) + l_1(2|y|) + c_1)(|x| + |y|)$$
$$\le (3\,l_1(2|x|) + c_1)\,|x| + (3\,l_1(2|y|) + c_1)\,|y|,$$
which is exactly (40) with $\gamma_i(s) = 3\,l_1(2s)$ and $c_i = c_1$, $i = 1, 2$.