5) Connect the negotiation process with the MPC
control process, because of the compatibilities
found between them.
Model Predictive Control (MPC), also known as
receding horizon control, is a control technique
widely used in industry (see (Qin & Badgwell, 2003)
and (Camacho & Bordons, 2004)) well suited for the
control of continuous LSS. In MPC, the control
input is obtained by solving a discrete-time optimal
control problem over a given horizon, producing an
optimal open-loop control input sequence. The first
control in that sequence is applied. At the next
sampling instant, a new optimal control problem is
formulated and solved based on the new
measurements.
The aim of MPC is to find control actions u_k, …, u_{k+Nc} such that after N_p steps the sequence of predicted outputs y approaches a set point y*.
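The receding-horizon procedure described above can be sketched for a simple unconstrained scalar system. The system parameters, weights, and the least-squares formulation below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Receding-horizon MPC sketch for a scalar system x(k+1) = a*x(k) + b*u(k),
# with output y(k) = x(k).  All numeric values are hypothetical.
a, b = 0.9, 0.5          # assumed system parameters
Np = 10                  # prediction horizon
y_star = 1.0             # set point y*
r = 0.01                 # input-effort weight

def mpc_step(x):
    # Prediction over the horizon: y(k+j) = a^j x(k) + sum_i a^(j-1-i) b u(k+i)
    F = np.array([a**j for j in range(1, Np + 1)])   # free response
    G = np.zeros((Np, Np))                           # forced response
    for j in range(Np):
        for i in range(j + 1):
            G[j, i] = a**(j - i) * b
    # Unconstrained problem: min ||G u + F x - y*||^2 + r ||u||^2,
    # solved as a stacked least-squares problem.
    A = np.vstack([G, np.sqrt(r) * np.eye(Np)])
    rhs = np.concatenate([y_star - F * x, np.zeros(Np)])
    u = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return u[0]                                      # apply the first input only

x = 0.0
for k in range(30):      # closed loop: a new problem is solved at every sample
    u0 = mpc_step(x)
    x = a * x + b * u0
```

In a full MPC formulation the least-squares step would be replaced by a constrained quadratic program, which is what makes the explicit representation of operating constraints possible.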
The theory of MPC is well developed; most
aspects, such as stability, nonlinearity, and
robustness, have been discussed in the literature
(see, e.g., (Bemporad & Morari, 1999) and (Morari &
Lee, 1999)). Moreover, MPC is very popular in the
process control industry because the actual control
objectives and operating constraints can be
represented explicitly in the optimization problem
that is solved at each control instant.
Typically, MPC is implemented in a centralized
fashion. The complete system is modelled, and all
the control inputs are computed in one optimization
problem.
The goal of the research described in this paper is
to exploit the attractive features of MPC (meaningful
objective functions and constraints) in a distributed
implementation combining learning techniques to
perform the negotiation of these variables in a
cooperative multi-agent environment and over a
multi-agent platform. All these ideas are the basis
of the proposed architecture. A methodology for the
application of the proposed architecture is also
provided.
The paper is organized as follows: Section 2
introduces the problem to be solved, and Section 3
describes known approaches to it based on
distributed MPC. The formalization of the proposed
architecture is given in Section 4. Sections 5 and 6
are devoted to an application example and the results
obtained. Finally, Section 7 presents the
conclusions, and Section 8 outlines the current and
further research being developed.
2 THE PROBLEM
In order to control an LSS in a distributed way, some
assumptions have to be made about its dynamics, i.e.
about the way the system behaves. Assume, first, that the
system can be divided into n subsystems, where each
subsystem consists of a set of nodes and the
interconnections between them. The problem of
determining the partitions of the network is not
addressed in this paper; instead the reader is referred
to (Siljak, 1991). The set of partitions should be
complete. This means that all system states and
control variables should be included at least in one
of the partitions.
Definition 1. System Partitions. P is the set of
system partitions and is defined by
P = {p_1, p_2, …, p_np}    (1)
where each system partition (subsystem) p_i is
described by a deterministic linear time-invariant
model that is expressed in discrete-time as follows

x_i(k+1) = A_i x_i(k) + B_{u,i} u_i(k) + B_{d,i} d_i(k)
y_i(k) = C_i x_i(k) + D_{u,i} u_i(k) + D_{d,i} d_i(k)    (2)
where the variables x, y, u and d are the state, output,
input and disturbance vectors, respectively; A, C, B
and D are the state, output, input and direct
(feedthrough) matrices, respectively. The subscripts u
and d indicate whether a matrix multiplies control
inputs or disturbances.
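As an illustration, the model of one partition of the form (2) can be simulated directly. The matrices below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# One-step simulation of a partition model of the form (2):
#   x_i(k+1) = A_i x_i(k) + B_{u,i} u_i(k) + B_{d,i} d_i(k)
#   y_i(k)   = C_i x_i(k) + D_{u,i} u_i(k) + D_{d,i} d_i(k)
# All numeric values here are illustrative.
A  = np.array([[0.8, 0.1], [0.0, 0.9]])   # state matrix
Bu = np.array([[0.5], [0.2]])             # control-input matrix
Bd = np.array([[0.1], [0.0]])             # disturbance-input matrix
C  = np.array([[1.0, 0.0]])               # output matrix
Du = np.array([[0.0]])                    # direct matrix (control inputs)
Dd = np.array([[0.0]])                    # direct matrix (disturbances)

def step(x, u, d):
    y = C @ x + Du @ u + Dd @ d           # current output
    x_next = A @ x + Bu @ u + Bd @ d      # state update
    return x_next, y

x = np.zeros(2)
u = np.array([1.0])
d = np.array([0.0])
x, y = step(x, u, d)
```

Each subsystem's MPC controller would use such a model to predict the local outputs over the horizon.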
Definition 2. Internal Variables. Internal variables
are control variables that appear in the model of only
one subsystem in the problem. The set of internal
variables of one partition is defined by
U = {u_1, u_2, …, u_nu}    (3)
Definition 3. Shared Variables. Shared variables
are control variables that appear in the model of at
least two subsystems in the problem. Their values
should be consistent across all subsystems in which
they appear, so they are also called negotiated
variables. V is the
set of negotiated variables defined by
V = {v_1, v_2, …, v_nv}    (4)
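Definitions 2 and 3 can be made concrete with a small sketch that classifies control variables by counting in how many subsystem models each one appears. The partition and variable names are hypothetical:

```python
# Illustrative identification of internal (U) and shared/negotiated (V)
# control variables from the input sets of each partition's model.
# The partition and variable names below are made up for this example.
inputs = {
    "p1": {"u1", "u2", "v1"},
    "p2": {"u3", "v1"},
}
all_vars = set().union(*inputs.values())
# Shared variables appear in at least two subsystem models (Definition 3);
# internal variables appear in exactly one (Definition 2).
V = {v for v in all_vars if sum(v in s for s in inputs.values()) >= 2}
U = all_vars - V
```

Here v1 would be a negotiated variable whose value the two controllers must agree on, while u1, u2 and u3 remain internal to their respective subsystems.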
Each subsystem i is controlled by an MPC
controller using:
- the model of the dynamics of subsystem i
given by equation (2);
- the measured state x_i(k) of subsystem i;
A MULTI-AGENT MPC ARCHITECTURE FOR DISTRIBUTED LARGE SCALE SYSTEMS