EXPLORING THE LINEAR RELATIONS IN THE ESTIMATION OF
MATRICES B AND D IN SUBSPACE IDENTIFICATION METHODS
Catarina J. M. Delgado
Faculdade de Economia do Porto
Rua Dr. Roberto Frias, 4200-464 Porto - Portugal
P. Lopes dos Santos
Faculdade de Engenharia da Universidade do Porto
Rua Dr. Roberto Frias, 4200-465 Porto - Portugal
Keywords:
System Identification, Subspace methods, State-space models, Linear Systems, Parameter estimation.
Abstract:
In this paper we provide a different way to estimate the matrices B and D in subspace identification algorithms.
The starting point was the method proposed by Van Overschee and De Moor (1996), the only one that applies
subspace ideas to the estimation of those matrices. We have derived new (and simpler) expressions, and we have
found that the method proposed by Van Overschee and De Moor (1996) can be rewritten as a weighted least
squares problem involving the future outputs and inputs.
1 INTRODUCTION
In subspace identification methods, there are two
main steps: in the first step, a basis for the column
space of a certain matrix, the extended observability
matrix, is determined from the input-output data. The
dimension of this subspace is equal to n, the order of
the system to be identified. If we know the extended
observability matrix, then we can estimate (explicitly
or implicitly) the state sequence.
In the second main step of these algorithms, the
system matrices are estimated. Several strategies exist
for estimating A, C, B and D, but we will focus our
attention on the one proposed by Van Overschee and
De Moor (van Overschee and de Moor, 1996) for the
R-MOESP (Robust MOESP) algorithm. We show in this
paper that, for the estimation of the B and D matrices,
the R-MOESP method can be simplified, thus allowing a
significant improvement in the numerical efficiency of
the estimation procedure, without any loss of accuracy.
On the other hand, we manage to relate the R-MOESP
algorithm to a different (geometric) approach (?), thus
proving that these two approaches are not that different.
This can be seen as an extension of the unifying theorem
to the step of estimating the B and D matrices in subspace
identification algorithms. This kind of relation has already
been suggested for the matrices A and C (Chiuso and Picci,
2001), but it has never been proposed for the estimation
of matrices B and D, since the two approaches appear
to be very different.
In this paper, we will focus our attention on the
problem of estimating matrices B and D, knowing the
extended observability matrix and matrices A and C.
The paper is organized as follows: in section 2, we
introduce the subspace identification problem, the
notation and the main concepts behind subspace methods,
and we describe the technique proposed by Van
Overschee and De Moor (van Overschee and de Moor,
1996) for the estimation of B and D. In section 3, we
provide new expressions for the estimation of the input
matrices, and in section 4 we show that the technique
presented by Van Overschee and De Moor (van Overschee
and de Moor, 1996) is merely a projection onto the null
space of a certain matrix. Finally, in section 5, some
simulation results are presented and, in section 6, the
conclusions are drawn.
2 BACKGROUND
2.1 Subspace Identification Problem
Subspace identification algorithms aim to estimate,
from measured input/output data sequences ({u_k} and
{y_k}, respectively), the system described by

x_{k+1} = A x_k + B u_k + K e_k
y_k = C x_k + D u_k + e_k    (1)

E[e_p e_q^T] = R_e δ_{pq} > 0    (2)
where A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{l×n}, D ∈ R^{l×m},
K ∈ R^{n×l} and x_k ∈ R^n. The sequence {e_k} ⊂ R^l is a
white noise stochastic process, and the input data sequence
is assumed to be a persistently exciting quasi-stationary
deterministic sequence (Ljung, 1987).
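For concreteness, here is a minimal NumPy sketch of how data obeying (1)-(2) can be generated; the helper name and the one-column-per-sample layout are our own illustrative choices, not part of the identification method itself.

import numpy as np

def simulate_innovation_model(A, B, C, D, K, Re, u, rng=None):
    """Simulate x_{k+1} = A x_k + B u_k + K e_k, y_k = C x_k + D u_k + e_k
    for an input u given as an m x N array (one column per sample)."""
    rng = np.random.default_rng() if rng is None else rng
    n, l, N = A.shape[0], C.shape[0], u.shape[1]
    Lc = np.linalg.cholesky(Re)          # factor of the innovation covariance
    x = np.zeros(n)
    y = np.zeros((l, N))
    for k in range(N):
        e = Lc @ rng.standard_normal(l)  # white noise with E[e e^T] = Re
        y[:, k] = C @ x + D @ u[:, k] + e
        x = A @ x + B @ u[:, k] + K @ e
    return y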
2.2 Block Hankel Matrices
The notation used is based on that of Van Overschee and
De Moor (van Overschee and de Moor, 1996): U and Y are
two block Hankel matrices built with 2i row-blocks and j
columns (for N, the number of measurements, greater than
or equal to 2i + j - 1):
U = [U(1); U(2); ...; U(2i)] ∈ R^{2mi×j},    Y = [Y(1); Y(2); ...; Y(2i)] ∈ R^{2li×j}

where U(k) and Y(k) are the k-th row-blocks of U and Y,
respectively. Matrix U can be partitioned as:
U_p = [U(1); ...; U(i)],        U_f = [U(i+1); ...; U(2i)]
U_{p+} = [U(1); ...; U(i+1)],   U_{f-} = [U(i+2); ...; U(2i)]

where the subscripts p and f denote "past" and "future",
respectively.
The same applies to matrix Y and to the input/output data
matrices:

H = [U; Y],    H_p = [U_p; Y_p],    H_{p+} = [U_{p+}; Y_{p+}],
H_U = [H_p; U_f]    and    H_U^+ = [H_{p+}; U_{f-}]

where, throughout, [A; B] denotes the vertical stacking of
the row blocks A and B, and [A  B] their horizontal
concatenation.
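As an illustration of this notation, the block Hankel matrices and their past/future partitions can be built as in the following sketch (the helper name and layout are ours):

import numpy as np

def block_hankel(w, two_i, j):
    """Build the block Hankel matrix [w(1); ...; w(2i)] with 2i row-blocks
    and j columns from an m x N data array w (needs N >= 2i + j - 1)."""
    m, N = w.shape
    assert N >= two_i + j - 1, "not enough samples"
    return np.vstack([w[:, k:k + j] for k in range(two_i)])

# Past/future partition, e.g. for the inputs (m rows per block):
# U = block_hankel(u, 2 * i, j)
# U_p, U_f = U[:m * i, :], U[m * i:, :]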
When the input-output data is organized into matrices
with this special block Hankel structure, (1) can be
written as

Y_f = Γ_i X_i + H_i^d U_f + E_f    (3)
Y_f^- = Γ_{i-1} X_{i+1} + H_{i-1}^d U_f^- + E_f^-    (4)

and also as

[X̂_{i+1}; Y(i)] = [A  B; C  D] [X̂_i; U(i)] + [ρ_w; ρ_v]    (5)
where:
1. Matrix X̂_i is the state sequence generated by a bank
of Kalman filters, working in parallel on each of the
columns of the block Hankel matrix of past inputs and
outputs:

X̂_i = [x̂_{i+1|1}  ...  x̂_{N-i+1|N-2i+1}]
X̂_{i+1} = [x̂_{i+2|2}  ...  x̂_{N-i+2|N-2i+2}]

2. Γ_i ∈ R^{li×n} is the extended observability matrix
(since i > n), where the subscript i denotes the number
of row-blocks.

3. H_i^d ∈ R^{li×mi} is a block Toeplitz matrix, built
with the Markov parameters.

4. ρ_{w_k} = x̂_{k+1} - Âx̂_k - B̂u_k,  ρ_{v_k} = y_k - Ĉx̂_k - D̂u_k  and

[ρ_w; ρ_v] = [ρ_{w_i}  ...  ρ_{w_{i+j-1}}; ρ_{v_i}  ...  ρ_{v_{i+j-1}}]
2.3 The projection theorem
The main idea behind the subspace theory is stated in the
"projection theorem" (van Overschee and de Moor, 1996):
given (3) and (4), then, under certain conditions, there is
a connection between an estimated Kalman filter state
sequence and the orthogonal projection of the row space of
Y_f (future outputs) onto the joint row space of the past
inputs, past outputs and future inputs:

Z_i = Y_f / H_U = Γ_i X̂_i + H_i^d U_f    (6)
Z_{i+1} = Y_f^- / H_U^+ = Γ_{i-1} X̂_{i+1} + H_{i-1}^d U_f^-    (7)

where A/B denotes the orthogonal projection of the row
space of A onto the row space of B, and the state sequences
are given by

X̂_i = [θ_1  θ_2  θ_3] [X̂_0; U_p; Y_p]    (8)
X̂_{i+1} = [α_1  α_2  α_3] [X̂_0; U_{p+}; Y_{p+}]    (9)
where θ_1, θ_2, θ_3, α_1, α_2, α_3 are functions of the
system matrices, and X̂_0 is a function of U. In fact, as
the inputs are possibly correlated, one can obtain
information about X̂_0 from the inputs U by projecting the
initial state sequence X_0 (exact but unknown) onto U:
X̂_0 = X_0/U. A similar relation has been established
between a second estimated state sequence X̃_i and the
oblique projection of the row space of Y_f (future
outputs), along the row space of the future inputs U_f,
onto the row space of the past inputs and outputs H_p:

O_i = Y_f /_{U_f} H_p = Γ_i X̃_i    (10)
O_{i+1} = Y_f^- /_{U_f^-} H_{p+} = Γ_{i-1} X̃_{i+1}    (11)
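Both projections are simple matrix computations. The following sketch (our helper names, with data stored one sample per column) implements the orthogonal projection A/B and the oblique projection A/_B C through pseudo-inverses:

import numpy as np

def orth_proj(A, B):
    """Orthogonal projection A/B of the row space of A onto that of B."""
    return A @ B.T @ np.linalg.pinv(B @ B.T) @ B

def oblique_proj(A, B, C):
    """Oblique projection A/_B C: project the row space of A onto that of C
    along that of B (computed through the complement of B's row space)."""
    Pi_B_perp = np.eye(B.shape[1]) - B.T @ np.linalg.pinv(B @ B.T) @ B
    return A @ Pi_B_perp @ np.linalg.pinv(C @ Pi_B_perp) @ C

# e.g. Z_i = orth_proj(Y_f, H_U) and O_i = oblique_proj(Y_f, U_f, H_p)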
There is a slight difference between Z_i and O_i. In
fact, O_i can be computed from Z_i by simply ignoring
the information given by U_f. The consequence is clear:
part of the information required to estimate X̂_0 is no
longer available, so the estimated state sequence X̃_i is
different from X̂_i. Although X̃_i and X̂_i are not the
same estimates, they are still very similar and, actually,
under some special conditions (i → ∞, or {u_k} white
noise, or a purely deterministic system) they coincide.
This approximation of the state sequences is used to
obtain the more elegant and simple algorithm presented in
the next section. Unlike the algorithm that considers the
"exact" Kalman state estimates by implementing some
orthogonal projections (unbiased for j → ∞), this
approximate algorithm is biased for finite i, except in
certain special cases (van Overschee and de Moor, 1996).
2.4 Subspace Identification Algorithms

There are two main steps in subspace identification
algorithms:

1. determine the model order n and estimate the extended
observability matrix through the singular value
decomposition of a weighted oblique projection,
W_L O_i W_R, and

2. solve a least squares problem (see the sketch after
this list) in order to obtain the state space matrices:

min_{A,B,C,D} || [X̂_{i+1}; Y(i)] - [A  B; C  D] [X̂_i; U(i)] ||_F^2    (12)

where ||Ξ||_F denotes the Frobenius norm of matrix Ξ.
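The least squares problem (12) reduces to one pseudo-inverse; a minimal sketch (the function and argument names are ours):

import numpy as np

def system_matrices_ls(X_i, X_ip1, Y_i, U_i):
    """Solve (12): fit [X_{i+1}; Y(i)] ~ Theta [X_i; U(i)] in the least
    squares sense and split Theta into A, B, C and D."""
    n, m = X_i.shape[0], U_i.shape[0]
    Theta = np.vstack([X_ip1, Y_i]) @ np.linalg.pinv(np.vstack([X_i, U_i]))
    A, B = Theta[:n, :n], Theta[:n, n:]
    C, D = Theta[n:, :n], Theta[n:, n:]
    return A, B, C, D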
In the first step, since Van Overschee and De Moor
(van Overschee and de Moor, 1996) proved that (10) is a
valid approximation, one can estimate the observability
matrix from the singular value decomposition of O_i.
Since rank(Γ_i) = n (we assume {A, C} observable),
rank(O_i) should also be n, and

O_i = U S V^T = [U_n  U_n^⊥] [S_n  0; 0  0] [V_n^T; (V_n^⊥)^T] = U_n S_n V_n^T    (13)

where O_i ∈ R^{li×j}, S ∈ R^{li×j} is the diagonal matrix
with the singular values of O_i, U ∈ R^{li×li} and
V ∈ R^{j×j} are orthonormal matrices, U_n ∈ R^{li×n},
S_n ∈ R^{n×n}, V_n^T ∈ R^{n×j} and rank(O_i) = n. The
order n of the system should, therefore, be determined by
the number of nonzero singular values of O_i, dim(S_n)
(van Overschee and de Moor, 1996). However, in many
practical situations, when the measurements are corrupted
by noise, it may not be straightforward to distinguish the
"nonzero" from the "zero" singular values – one must then
make a decision by comparing the values or by assuming
different orders and comparing simulation errors.
As the column spaces of Γ_i and U_n S_n^{1/2} are the
same, we compute

Γ_i = U_n S_n^{1/2}    (14)

and then obtain Γ_{i-1} by removing the last l rows from Γ_i.
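In code, (13)-(14) amount to a truncated singular value decomposition; the relative tolerance used below to declare singular values "nonzero" is our own heuristic:

import numpy as np

def observability_from_svd(O_i, l, rtol=1e-8):
    """Estimate n, Gamma_i = U_n S_n^{1/2} and Gamma_{i-1} from O_i."""
    U, s, Vt = np.linalg.svd(O_i, full_matrices=False)
    n = int(np.sum(s > rtol * s[0]))      # count the "nonzero" singular values
    Gamma_i = U[:, :n] * np.sqrt(s[:n])   # scale columns by S_n^{1/2}
    return n, Gamma_i, Gamma_i[:-l, :]    # Gamma_{i-1}: drop the last l rows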
In the CVA and MOESP approaches, W_L and W_R are given by

            W_L                              W_R
CVA         (Y_f Π_{U_f}^⊥ Y_f^T)^{-1/2}     Π_{U_f}^⊥
MOESP       I_{li}                           Π_{U_f}^⊥

where Π_{U_f}^⊥ = I_j - U_f^+ U_f.
Knowing an estimate of Γ_i (Γ_i = U_n S_n^{1/2}),
matrices A and C are obtained by solving a linear
equation in the least squares sense:

[Γ_{i-1}^+ Z_{i+1}; Y(i)] = [A; C] Γ_i^+ Z_i + K_{BD} U_f + ρ    (15)
Lopes dos Santos and Martins de Carvalho (dos Santos and
de Carvalho, 2004) have shown that these estimates and the
estimates of A and C produced by the shift-invariance
property of Γ_i are the same. The matrix K_{BD} is then
used to estimate B and D, since

K_{BD} = [K_A; K_C] = [K_1  ...  K_i] =    (16)
       = [B  Γ_{i-1}^+ H_{i-1}^d; D  0] - [A; C] Γ_i^+ H_i^d =
       = [N_1 [D; B]  ...  N_i [D; B]]
Equation (16) can be rewritten as

[K_1; ...; K_i] = [N_1; ...; N_i] [D; B]    (17)

and B and D estimated in the least squares sense. Lopes
dos Santos and Martins de Carvalho (dos Santos and de
Carvalho, 2003) have shown that K_A can be written as

K_A = K_p K_C = -A P̄_i C^T K_C

where P̄_i = (Γ̄_i^T Γ̄_i)^{-1} and Γ̄_i = Γ_{i-1} A is
Γ_i with its first l rows removed.
only with
K
C(B,D)
=
·
K
p
I
l
¸
+
K
BD
= (18)
=
·
N
C1
·
D
B
¸
... N
Ci
·
D
B
¸ ¸
where

N_{C1} = [I_l - L_{C1}  -L_{C2}  ...  -L_{Ci}] G_i
N_{Ck} = [-L_{Ck}  ...  -L_{Ci}  0] G_i    (k > 1)
L_A = [L_{A1}  L_{A2}  ...  L_{Ai}] = A Γ_i^+
L_C = [L_{C1}  L_{C2}  ...  L_{Ci}] = C Γ_i^+
G_i = [I_l  0; 0  Γ_{i-1}]
Equation (18) can be rewritten as

[K_{C1}; ...; K_{Ci}] = [N_{C1}; ...; N_{Ci}] [D; B]    (19)

and B and D estimated in the least squares sense.
When the matrix U_f U_f^T is almost singular, (17)
provides bad results. It is then better to avoid the
explicit estimation of K_{BD} and to obtain B and D
directly from

P = [Γ_{i-1}^+ Z_{i+1}; Y(i)] - [A; C] Γ_i^+ Z_i =    (20)
  = [K_1  K_2  ...  K_i] U_f = Σ_{k=1}^{i} K_k U_k = Σ_{k=1}^{i} N_k [D; B] U_k

where U_k ∈ R^{m×j} is the k-th block row of U_f.
In order to determine D and B, one has to apply the
vectorization operator, vec(·), and the Kronecker product,
⊗, to (20). The new equation can then be solved in the
least squares sense:

vec(P) = (Σ_{k=1}^{i} (U_k^T ⊗ N_k)) vec([D; B])    (21)
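A sketch of (21) in NumPy (the helper and argument names are ours); note that vec(·) is column-major, hence order='F':

import numpy as np

def solve_DB(P, N_blocks, U_blocks, l, n, m):
    """Solve vec(P) = (sum_k U_k^T kron N_k) vec([D; B]) in the least
    squares sense; N_blocks and U_blocks hold the N_k and U_k."""
    S = sum(np.kron(Uk.T, Nk) for Uk, Nk in zip(U_blocks, N_blocks))
    theta, *_ = np.linalg.lstsq(S, P.flatten(order='F'), rcond=None)
    DB = theta.reshape(l + n, m, order='F')   # un-vec
    return DB[:l, :], DB[l:, :]               # D, B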
3 ESTIMATION OF MATRICES B
AND D
In this section, we will analyze both sides of (20).
First, we will consider matrix P and the relation between
the orthogonal projections Z_i and Z_{i+1}, and then
provide a new expression for P. Next, we will consider
matrix K_{BD}, also providing a new expression for this
matrix. Finally, we will relate P and K_{BD}.
3.1 The orthogonal projections
Theorem 1. Given matrices A ∈ R^{n×m}, B ∈ R^{b×m} and
C ∈ R^{c×m}, then

A/[B; C] - A/C = A/(B Π_C^⊥) =    (22)
             = (A/_C B) Π_C^⊥    (23)

where Π_C = C^+ C denotes the orthogonal projector onto
the row space of C and Π_C^⊥ = I_m - Π_C.
Proof. We can write the first orthogonal projection as

A/[B; C] = A [B^T  C^T] [B B^T  B C^T; C B^T  C C^T]^{-1} [B; C] =
         = A [B^T  C^T] [Ω^{-1}  γ_1; γ_2  ϕ^{-1}] [B; C] =
         = [θ_B  θ_C] [B; C]

where, by the Matrix Inversion Lemma (Kailath, 1980),

Ω^{-1} = (B Π_C^⊥ B^T)^{-1}
ϕ^{-1} = (C C^T)^{-1} + (C C^T)^{-1} C B^T (B Π_C^⊥ B^T)^{-1} B C^T (C C^T)^{-1}
γ_1 = -(B Π_C^⊥ B^T)^{-1} B C^T (C C^T)^{-1}
γ_2 = -(C C^T)^{-1} C B^T (B Π_C^⊥ B^T)^{-1}
On the other hand,

A/[B; C] = A/_B C + A/_C B    (24)

where A/_B C denotes the oblique projection of the row
space of A, along the row space of B, onto the row space
of C,

A/_B C = θ_C C = A (B^T γ_1 + C^T ϕ^{-1}) C    (25)

and, after some manipulation,

A/_B C = (A - A/_C B) Π_C    (26)

Therefore,

A/[B; C] = A/C + (A/_C B)(I - Π_C)    (27)
Corollary 2. Given Z_i = Y_f/H_U and Z_{i+1} = Y_f/H_U^+,
then

Z_{i+1} = Z_i + ΔZ_i = Z_i + Y_f/(Y(i) Π_{H_U}^⊥)    (28)

(Here Z_{i+1} denotes the projection of the whole of Y_f;
its first block row equals Y(i), since Y(i) lies in the
row space of H_U^+, and its remaining block rows form
Y_f^-/H_U^+.)
Proof. Since the row space of H_U^+ is spanned by the
rows of [H_U; Y(i)], we can apply (22) with A = Y_f,
B = Y(i) and C = H_U.
Another way to prove this is through the LQ decomposition
of H_U^+:

[H_U; Y(i)] = [L_{UH}  0  0; L_{Y1}  L_{Y2}  0] Q,    Q = [Q_{UH}; Q_Y; Q_⊥]

where the blocks of Q have orthonormal rows.
Expressing Y_f = [Y(i); Y_f^-] in this orthonormal basis,

Y_f Q^T = [Y(i); Y_f^-] Q^T = [B_{UH}  B_Y  B_⊥] = [b_{UH}  b_Y  b_⊥; B_{mUH}  B_{mY}  B_{m⊥}]

where the b blocks are the coordinates of Y(i) and the
B_m blocks those of Y_f^-. In fact,

Z_i = [b_{UH}; B_{mUH}] Q_{UH}    (29)

Z_{i+1} = [b_{UH}  b_Y; B_{mUH}  B_{mY}] [Q_{UH}; Q_Y]    (30)

ΔZ_i = [b_Y; B_{mY}] Q_Y = Y_f/(Y(i) Π_{H_U}^⊥) = Z_{i+1} - Z_i    (31)
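Numerically, the LQ factors used above can be obtained from a QR decomposition of the transposed matrix; a minimal sketch:

import numpy as np

def lq(M):
    """LQ decomposition M = L Q with L lower trapezoidal and Q having
    orthonormal rows."""
    Q, R = np.linalg.qr(M.T)   # M^T = Q R  =>  M = R^T Q^T
    return R.T, Q.T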
3.2 Simplifying matrix P
Theorem 3. Given (1), where {A, C} is observable and A is
non-singular, then

P = [Γ_{i-1}^+ Z_{i+1}; Y(i)] - [A; C] Γ_i^+ Z_i    (32)

can be rewritten as

P = M Z_i + N ΔZ_i    (33)

where

Z_i = Y_f/H_U = Y_f/[H_p; U_f]    (34)
ΔZ_i = Y_f/Ŷ(i) = Y_f/(Y(i) Π_{H_U}^⊥),  with Ŷ(i) = Y(i) Π_{H_U}^⊥    (35)

and

N = [0  Γ_{i-1}^+; I_l  0]    (36)
M = [-A P_i C^T  (P_{i-1} - A P_i A^T) Γ_{i-1}^T; I_l - C P_i C^T  -C P_i A^T Γ_{i-1}^T]    (37)
Proof. Since we assume {A, C} to be observable, Γ_i and
Γ_{i-1} are full column rank matrices, and we can replace
their pseudo-inverses with

Γ_{i-1}^+ = (Γ_{i-1}^T Γ_{i-1})^{-1} Γ_{i-1}^T = P_{i-1} Γ_{i-1}^T    (38)
Γ_i^+ = (Γ_i^T Γ_i)^{-1} Γ_i^T = P_i Γ_i^T    (39)

On the other hand, if A is a non-singular matrix then, by
the shift-invariance property of Γ_i,

A = Γ_{i-1}^+ Γ̄_i    (40)

with Γ̄_i = [0  I_{l(i-1)}] Γ_i = Γ_{i-1} A as before,
and, with P̄_i = (Γ̄_i^T Γ̄_i)^{-1},

P_{i-1} = A P̄_i A^T    (41)
Therefore,

P = [P_A; P_C] = [0  P_{i-1} Γ_{i-1}^T; I_l  0] Z_{i+1} - [A; C] P_i Γ_i^T Z_i    (42)

where Z_{i+1} now stands for the full projection Y_f/H_U^+
of Corollary 2 (its first block row is Y(i), and Γ_{i-1}^+
acts on its remaining block rows). Using Z_{i+1} = Z_i + ΔZ_i
and Γ_i^T = [C^T  A^T Γ_{i-1}^T],

P_C = [I_l  0] Z_{i+1} - C P_i Γ_i^T Z_i =    (43)
    = [I_l  0] Z_{i+1} - [C P_i C^T  C P_i A^T Γ_{i-1}^T] Z_i =
    = [I_l - C P_i C^T  -C P_i A^T Γ_{i-1}^T] Z_i + [I_l  0] ΔZ_i

where [I_l  0] ΔZ_i = ΔZ_i^Y = Y(i) Π_{H_U}^⊥ = Ŷ(i), and

P_A = [0  P_{i-1} Γ_{i-1}^T] Z_{i+1} - A P_i Γ_i^T Z_i =    (44)
    = [-A P_i C^T  (P_{i-1} - A P_i A^T) Γ_{i-1}^T] Z_i + [0  Γ_{i-1}^+] ΔZ_i
Then,

P = [-A P_i C^T  (P_{i-1} - A P_i A^T) Γ_{i-1}^T; I_l - C P_i C^T  -C P_i A^T Γ_{i-1}^T] Z_i + [0  Γ_{i-1}^+; I_l  0] ΔZ_i = M Z_i + N ΔZ_i
3.3 Simplifying K_{BD}

Theorem 4. Given the matrix K_{BD} defined in (16), then

K_{BD} = M H_i^d    (45)

where M was introduced in (37).

Proof. As mentioned before,

K_{BD} = [B  Γ_{i-1}^+ H_{i-1}^d; D  0] - [A; C] Γ_i^+ H_i^d    (46)

Since H_i^d can be written as

H_i^d = [D  0; Γ_{i-1}B  H_{i-1}^d] = [ϕ_i  [0; H_{i-1}^d]],  with ϕ_i = [D; Γ_{i-1}B],

the expression (46) can be expressed as a function of
ϕ_i and H_{i-1}^d:

K_{BD} = [K_{11}  K_{12}; K_{21}  K_{22}] = [M_a  M_b; M_c  M_d] [D  0; Γ_{i-1}B  H_{i-1}^d]

where

K_{11} = B - A P_i Γ_i^T ϕ_i
K_{12} = (P_{i-1} Γ_{i-1}^T - A P_i Γ̄_i^T) H_{i-1}^d = (P_{i-1} - A P_i A^T) Γ_{i-1}^T H_{i-1}^d
K_{21} = D - C P_i Γ_i^T ϕ_i
K_{22} = -C P_i Γ̄_i^T H_{i-1}^d = -C P_i A^T Γ_{i-1}^T H_{i-1}^d
M_a = -A P_i C^T
M_b = P_{i-1} Γ_{i-1}^T - A P_i Γ̄_i^T = (P_{i-1} - A P_i A^T) Γ_{i-1}^T
M_c = I_l - C P_i C^T
M_d = -C P_i Γ̄_i^T = -C P_i A^T Γ_{i-1}^T

so that K_{BD} = [M_a  M_b; M_c  M_d] H_i^d = M H_i^d,
with M as in (37).
A different way to prove this result can be found in
(Delgado et al., 2004).
3.4 The estimation of B and D
Theorem 5. Equation (20) can be written as

M Y_f = M H_i^d U_f    (47)

where

M = [A (P_i - P̄_i); C P_i] (-Γ_i^T + [C^T (C P_i C^T)^{-1}  0]) =    (48)
  = [-A P_i C^T  (P_{i-1} - A P_i A^T) Γ_{i-1}^T; I_l - C P_i C^T  -C P_i A^T Γ_{i-1}^T]

Proof. We start by assuming that M can be written as

M = [A (P_i - P̄_i); C P_i] [N_1  N_2]    (49)

and then prove that

N_1 = C^T ((C P_i C^T)^{-1} - I_l)    (50)
N_2 = -A^T Γ_{i-1}^T
In fact, when A is a non-singular matrix,

P_{i-1} - A P_i A^T = A (P̄_i - P_i) A^T    (51)

and, therefore,

[A (P_i - P̄_i); C P_i] N_2 = [(P_{i-1} - A P_i A^T) Γ_{i-1}^T; -C P_i A^T Γ_{i-1}^T]
On the other hand, since

C P_i N_1 = C P_i C^T ((C P_i C^T)^{-1} - I_l) = I_l - C P_i C^T

and

A P_i C^T = (A P̄_i C^T)(I_l - C P_i C^T)
A (P̄_i - P_i) = (A P̄_i C^T)(C P_i)

we obtain

A (P_i - P̄_i) N_1 = -(A P̄_i C^T)(C P_i) N_1 =
                  = -(A P̄_i C^T)(I_l - C P_i C^T) = -A P_i C^T
Considering all the previous results, we can see that
knowing estimates of the matrices A, C and Γ_i allows the
estimation of H_i^d in the least squares sense and,
therefore, the estimation of B and D.
4 ORTHOGONAL PROJECTION ONTO THE NULL SPACE OF Γ_i^T
An analysis of the matrix M shows that

M = Υ C P_i (Ω - Γ_i^T)    (52)

where

Υ = [-A P̄_i C^T; I_l]    (53)
Ω = [C^T (C P_i C^T)^{-1}  0]    (54)
and that M is such that

M Y_f = M (Γ_i X_i + H_i^d U_f + E_f) =    (55)
      = M H_i^d U_f + M E_f    (56)

which means that

rowspace(M) ⊥ colspace(Γ_i)    (57)
In fact,

M Γ_i = Υ C P_i (Ω - Γ_i^T) Γ_i =    (58)
      = Υ (C P_i Ω Γ_i - C P_i Γ_i^T Γ_i) =
      = Υ (C P_i C^T (C P_i C^T)^{-1} C - C I_n) =
      = Υ (C - C) = 0

This means that the rows of

C P_i ([C^T (C P_i C^T)^{-1}  0] - Γ_i^T)

are orthogonal to the columns of Γ_i.
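The explicit form (37)/(48) of M is straightforward to assemble once A, C and Γ_i are available; the sketch below (our helper name) also makes the orthogonality relation (58) easy to check numerically:

import numpy as np

def build_M(A, C, Gamma_i):
    """Form M of (37) from A, C and Gamma_i; its rows are orthogonal to
    the columns of Gamma_i, cf. (58)."""
    l = C.shape[0]
    G1 = Gamma_i[:-l, :]                      # Gamma_{i-1}
    Pi = np.linalg.inv(Gamma_i.T @ Gamma_i)   # P_i
    P1 = np.linalg.inv(G1.T @ G1)             # P_{i-1}
    top = np.hstack([-A @ Pi @ C.T, (P1 - A @ Pi @ A.T) @ G1.T])
    bot = np.hstack([np.eye(l) - C @ Pi @ C.T, -C @ Pi @ A.T @ G1.T])
    return np.vstack([top, bot])

# sanity check: np.allclose(build_M(A, C, Gamma_i) @ Gamma_i, 0)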
5 SIMULATION RESULTS
The following system, with two inputs and two outputs,
represented in the forward innovation model, was
considered:

x_{k+1} = A x_k + B u_k + K e_k
y_k = C x_k + D u_k + e_k    (59)

E[e_p e_q^T] = R_e δ_{pq} > 0    (60)

where:

A = [0.603  0.603  0  0; -0.603  0.603  0  0; 0  0  -0.603  -0.603; 0  0  0.603  -0.603]

B = [1.1650  -0.6965; 0.6268  1.6961; 0.0751  0.0591; 0.3516  1.7971]

C = [0.2641  -1.4462  1.2460  0.5774; 0.8717  -0.7012  0.6390  -0.3600]

D = [-0.1356  -1.2704; -1.3493  0.9846]

K = [0.2820  0.3041; 0.7557  0.0296; 0.1919  0.1317; 0.3797  0.6538]

R_e = [0.1253  0.1166; 0.1166  0.2170]
As inputs, two white noise sequences with 1000
samples were generated.
Figure 1: Singular values (MOESP approach).
Comparing the results obtained with the proposed and the
original algorithms, we obtain similar values for the
matrices, for the frequency responses (figures 2 and 3)
and for the simulation errors (van Overschee and de Moor,
1996):

ε_old = [15.0104; 14.6752] (%)
ε_new = [15.0027; 14.6731] (%)
Figure 2: The frequency response of the original and the
estimated system (MOESP approach).
Figure 3: The frequency response of the original and the
estimated system (proposed approach, based on Van
Overschee and De Moor).
6 CONCLUSIONS
In this paper we describe an alternative approach for the
estimation of matrices B and D in subspace identification.
Considering the methods used nowadays, both the
"simulation error method" and the "prediction error
method" do not apply the subspace ideas, since they "go
back to the data" (van Overschee and de Moor, 1996). As
to the robust method proposed by Van Overschee and De
Moor (van Overschee and de Moor, 1996), it is the slowest
of the existing methods, due to its numerical complexity.
We have shown that this robust subspace method can be
expressed simply as an orthogonal projection of the future
outputs onto the orthogonal complement of the column space
of the extended observability matrix, thus providing a new
sort of simpler (but equally accurate) subspace
algorithms.
REFERENCES
Chiuso, A. and Picci, G. (2001). Asymptotic variance of
subspace estimates. In Proc. CDC'01 - 40th Conference on
Decision and Control, Orlando, Florida, USA, pages
3910-3915.

Delgado, C., dos Santos, P. L., and de Carvalho, J. L. M.
(2004). Simplified estimation of matrices B and D in
subspace identification algorithm. In Proc. Controlo'04 -
6th Portuguese Conference on Automatic Control, Faro,
Portugal.

dos Santos, P. L. and de Carvalho, J. L. M. (2003).
Improving the numerical efficiency of the B and D
estimates produced by the combined deterministic-stochastic
subspace identification algorithms. In Proc. CDC'03 - 42nd
Conference on Decision and Control, Maui, Hawaii, USA,
pages 3473-3478.

dos Santos, P. L. and de Carvalho, J. L. M. (2004). A new
insight to the A and C matrices extraction in a MOESP type
subspace identification algorithm. In Proc. Controlo'04 -
6th Portuguese Conference on Automatic Control, Faro,
Portugal.

Kailath, T. (1980). Linear Systems. Prentice-Hall.

Ljung, L. (1987). System Identification: Theory for the
User. Prentice-Hall.

van Overschee, P. and de Moor, B. (1996). Subspace
Identification for Linear Systems. Kluwer Academic
Publishers.