Computation of Neural Networks Lyapunov Functions for Discrete and Continuous Time Systems with Domain of Attraction Maximization

Benjamin Bocquillon¹, Philippe Feyel¹, Guillaume Sandou² and Pedro Rodriguez-Ayerbe²

¹Safran Electronics & Defense, 100 avenue de Paris, Massy, France
²Université Paris-Saclay, CentraleSupélec, CNRS, L2S, 3 rue Joliot Curie, 91192 Gif-sur-Yvette, France
Keywords: Lyapunov Function, Domain of Attraction, Optimization, Neural Network, Nonlinear System.
Abstract: This contribution presents a new approach for computing Lyapunov functions represented by neural networks for nonlinear discrete-time systems in order to prove asymptotic stability. Building on Lyapunov theory and the notion of domain of attraction, the proposed approach relies on an optimization method that determines a Lyapunov function modeled by a neural network while maximizing the domain of attraction. Several simulation examples are presented to illustrate the potential of the proposed method.
1 INTRODUCTION
Lyapunov theory, introduced in the late nineteenth century (Lyapunov, 1892), is a classical way to investigate the stability of an equilibrium point of a dynamical system. The method relies on the search for a function that exhibits three properties which are sufficient for establishing the Domain Of Attraction (DOA) of a stable equilibrium point: (1) it must be a locally positive definite function; (2) it must have continuous partial derivatives; and (3) its time derivative along any state trajectory must be negative semi-definite. Although efficient for proving stability once the so-called Lyapunov function is known, there is no general method for constructing such a function.
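For reference, these conditions can be written in the standard textbook form; the notation below (V for the candidate function, f for the dynamics, D for the considered domain) is chosen here for illustration and is not necessarily that used later in the paper. For a continuous-time system \dot{x} = f(x) with an equilibrium at the origin:

    V(0) = 0  and  V(x) > 0  for all x in D \ {0},            (positive definiteness)
    V \in C^1(D),                                              (continuous partial derivatives)
    \dot{V}(x) = \nabla V(x)^T f(x) \le 0  for all x in D.     (negative semi-definite derivative)

For a discrete-time system x_{k+1} = f(x_k), the third condition is replaced by the difference condition V(f(x)) - V(x) \le 0 for all x in D.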
The Lyapunov function construction is still an open problem, but several methods, often based on optimization, have emerged in the literature. One can cite (Panikhom and Sujitjorn, 2012), where the best quadratic Lyapunov function is sought. However, such methods are too conservative for complex industrial systems. The work of (Argáez et al., 2018) proposes a new iterative algorithm that aims to avoid obtaining trivial solutions when constructing complete Lyapunov functions. This algorithm is based on mesh-free numerical approximation and analyses the failure of convergence in certain areas to determine the chain-recurrent set. Once again, the method appears too difficult to use in an industrial context where flexibility is needed. Finally, the survey (Giesl and Hafstein, 2015) gives a wide overview of the methods that can be used for Lyapunov function computation, but the methods it covers remain conservative when the system is complex and highly nonlinear.
However, in the authors' opinion, Artificial Intelligence, Machine Learning and Neural Networks bring a great opportunity to design powerful tools to justify and quickly certify complex industrial systems, such as those found in the aerospace field. One of the first papers using Artificial Intelligence to compute Lyapunov functions is (Prokhorov, 1994), where a so-called Lyapunov Machine, a special-design artificial neural network, is described for Lyapunov function approximation. The author indicates that the proposed algorithm has substantial computational complexity, among other issues to be resolved, and defers their resolution to future work. The work of (Banks, 2002) suggests Genetic Programming for computing Lyapunov functions. However, the computed Lyapunov functions may exhibit locally conservative behavior.
In this study, the use of Neural Networks makes it possible to overcome these limits. Neural Networks are known to be powerful regressors that can approximate any nonlinear function. As a result, they appear to be good candidates for the construction of a Lyapunov function. In the literature, one can find other works using neural networks to construct or approximate a Lyapunov function (Serpen, 2005), as well as the paper (Petridis and Petridis, 2006), in which the authors propose an interesting and promising approach for the construction of Lyapunov functions represented by neural networks. In (Bocquillon et al., 2020), the authors propose a new constrained optimization scheme such that the weights of the neural network are calculated in a way that is mathematically proven to result in a Lyapunov function while maximizing the DOA.
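To make the idea concrete, the following sketch shows how a small feed-forward network can serve as a Lyapunov candidate for a discrete-time system and how the positivity and decrease conditions can be checked on sampled states. This is an illustrative sketch only: the dynamics f, the network architecture, the weight values and the function check_lyapunov_conditions are assumptions made here, not the algorithm of (Bocquillon et al., 2020) or of the present paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        # Example nonlinear discrete-time dynamics x_{k+1} = f(x_k), assumed for illustration.
        x1, x2 = x
        return np.array([0.5 * x1 + 0.1 * x2 ** 2,
                         0.4 * x2 - 0.1 * x1 * x2])

    # Random weights stand in for weights that would be produced by the optimization.
    W1 = rng.standard_normal((8, 2))
    W2 = rng.standard_normal((1, 8))

    def V(x):
        # V(x) = ||W2 tanh(W1 x)||^2 vanishes at the origin and is non-negative by construction.
        h = np.tanh(W1 @ x)
        z = W2 @ h
        return float(z @ z)

    def check_lyapunov_conditions(n_samples=1000, radius=1.0):
        # Sample-based check of V(x) > 0 and V(f(x)) - V(x) < 0 on a box around the origin.
        for _ in range(n_samples):
            x = rng.uniform(-radius, radius, size=2)
            if np.linalg.norm(x) < 1e-6:
                continue
            if V(x) <= 0.0 or V(f(x)) - V(x) >= 0.0:
                return False
        return True

    print("Conditions hold on the sampled points:", check_lyapunov_conditions())

With weights produced by a suitable constrained optimization, such a check would succeed over the whole sampled domain; with the random weights used above it will typically fail, which illustrates why the construction of suitable weights is the core difficulty addressed in this paper.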