RECURSIVE EXTENDED COMPENSATED LEAST SQUARES
BASED ALGORITHM FOR
ERRORS-IN-VARIABLES IDENTIFICATION
Tomasz Larkowski, Jens G. Linden and Keith J. Burnham
Control Theory and Applications Centre, Coventry University, Priory Street, Coventry, CV1 5FB, U.K.
Keywords:
Bias compensation, Errors-in-variables, Parameter estimation, Recursive algorithms, System identification.
Abstract:
An algorithm for the recursive identification of single-input single-output linear discrete-time time-invariant
errors-in-variables system models in the case of white input and coloured output noise is presented. The
approach is based on a bilinear parametrisation technique which allows the model parameters to be estimated
together with the auto-correlation elements of the input/output noise sequences. In order to compensate for the
bias in the recursively obtained least squares estimates, the extended bias compensated least squares method
is used. Two alternatives for the online update of the associated pseudo-inverse of the extended observation
covariance matrix are investigated, namely an approach based on the matrix pseudo-inverse lemma and an
approach based on the recursive extended instrumental variables technique. A Monte-Carlo simulation study
demonstrates the appropriateness and the robustness against noise of the proposed scheme.
1 INTRODUCTION
The errors-in-variables (EIV) approach forms an ex-
tension of the standard output error system setup
in which it is postulated that only the output mea-
surements are uncertain. In the EIV framework all
measured signals, hence, including the system in-
put, are assumed to be contaminated with noise, see
(Söderström, 2007) for a recent survey on this sub-
ject. The EIV framework can offer advantages over
the classical approach, mainly when the description
of the internal laws governing a system is of prime in-
terest, e.g. application areas in chemistry, image pro-
cessing, fault detection etc., see (Söderström, 2007;
Markovsky and Van Huffel, 2007) for further details.
One of the EIV techniques that has been shown
to be robust and to yield relatively precise estimates
is the extended compensated least squares (ECLS)
method. The approach is based on the extended bias
compensated least squares (EBCLS) and utilises sep-
arable nonlinear least squares to solve the resulting
overall identification problem. The method was first
proposed in (Ekman, 2005a), which considered the
case of white input and output noise sequences, and
was subsequently extended to handle the case of coloured
output noise in (Ekman et al., 2006). Further anal-
ysis, considering a generalised framework, has been
carried out in (Mahata, 2007).
Alternatively, by exploiting the property that the
overall optimisation problem is bilinear in the un-
knowns, see (Ljung, 1999), which in this case corre-
sponds to the model parameters and the input/output
noise auto-correlation elements, the principle of bi-
linear parametrisation can be utilised. The resulting
scheme, termed here the extended bilinear parametri-
sation method (EBPM) involves solving iteratively
two ordinary least squares problems, see (Larkowski
et al., 2008) for details. Although the quality of the
parameters obtained by the EBPM is comparable to
the quality of the estimates yielded by the ECLS, an
important distinction is that the EBPM is significantly
less computationally demanding than the ECLS tech-
nique.
The bilinear parametrisation method was first
utilised to solve the EIV identification problem in
a recursive manner in (Ekman, 2005b) for the case
of white input and output noise. It has also been
exploited in (Ikenoue et al., 2008) for the case of
coloured input and output noise sequences and for the
purpose of offline as well as online estimation. How-
ever, in both cases the term ‘bilinear parametrisation’
has not been explicitly stated. In (Ekman, 2005b) the
constructed recursive algorithm is not computation-
ally attractive, since its complexity at each iteration is
actually greater than that of the corresponding batch
algorithm applied in an offline manner at each recur-
sion. In contrast, in (Ikenoue et al., 2008), due to a spe-
cial choice of the instruments, the resulting algorithm
is not causal in general, hence its recursive imple-
mentation yields delayed estimates.
In this paper a recursive realisation of the EBPM
is presented for a discrete-time linear time-invariant
(LTI) single-input single-output (SISO) system model
in the case of white input and coloured output
noise and it is demonstrated that the above-men-
tioned shortcomings may be avoided. The bias of
the recursively calculated least squares (LS) estima-
tor is removed at each recursion via the extended bias
compensated least squares (EBCLS) technique. The
online update of the pseudo-inverse of the overde-
termined observation matrix is realised by consider-
ing two alternatives, namely an approach based on the
pseudo-inverse lemma, see (Feng et al., 2001), and an
approach based on the recursive extended instrumen-
tal variables technique, see (Friedlander, 1984). The
two resulting algorithms are analysed and compared
with their offline counterpart via a Monte-Carlo sim-
ulation study. It is shown that the instrumental vari-
ables based approach is, in general, preferable due to
its superior robustness and improved convergence
properties.
2 NOTATION AND PROBLEM
STATEMENT
Consider a discrete-time LTI SISO system repre-
sented by the difference equation
$A(q^{-1})\, y_{0_k} = B(q^{-1})\, u_{0_k}$,    (1)
where the polynomials $A(q^{-1})$ and $B(q^{-1})$ are given by
$A(q^{-1}) \triangleq 1 + a_1 q^{-1} + \ldots + a_{n_a} q^{-n_a}$,    (2a)
$B(q^{-1}) \triangleq b_1 q^{-1} + \ldots + b_{n_b} q^{-n_b}$    (2b)
with $q^{-1}$ being the backward shift operator, defined by $q^{-1} x_k \triangleq x_{k-1}$.
The unknown noise-free input and noise-free output signals, denoted $u_{0_k}$ and $y_{0_k}$,
respectively, are related to the available noisy variables, denoted $u_k$ and $y_k$, such that
$u_k = u_{0_k} + \tilde{u}_k$,   $y_k = y_{0_k} + \tilde{y}_k$,    (3)
where $\tilde{u}_k$ and $\tilde{y}_k$ denote the input and output measurement noise sequences, respectively.
The following standard assumptions, see e.g. (Ekman et al., 2006), are introduced:
A1 The LTI system (1) is asymptotically stable, i.e. $A(q^{-1})$ has all zeros inside the unit circle.
A2 All system modes are observable and controllable, i.e. $A(q^{-1})$ and $B(q^{-1})$ share no common factors.
A3 The system structure, i.e. $n_a$ and $n_b$, is known a priori and $n_a \geq n_b$.
A4 The true input $u_{0_k}$ is a zero mean, ergodic random sequence, persistently exciting of a sufficiently high order, i.e. at least of order $n_a + n_b$.
A5a The additive input noise sequence $\tilde{u}_k$ of unknown variance $\sigma_{\tilde{u}}$ is an ergodic zero mean white process.
A5b The additive output noise sequence $\tilde{y}_k$ is an ergodic zero mean process characterised by an unknown auto-covariance sequence $\{r_{\tilde{y}}(0), r_{\tilde{y}}(1), \ldots\}$.
A6 The input/output noise sequences are mutually uncorrelated and uncorrelated with the signals $u_{0_k}$ and $y_{0_k}$.
Postulating that the output noise sequence exhibits
an arbitrary degree of correlation allows measurement
sensor uncertainties, as well as potential disturbances
in the process, to be taken into account.
The system parameter vector is denoted
$\theta \triangleq [a^T \;\; b^T]^T \in \mathbb{R}^{n_\theta}$,    (4a)
$a \triangleq [a_1 \; \ldots \; a_{n_a}]^T \in \mathbb{R}^{n_a}$,    (4b)
$b \triangleq [b_1 \; \ldots \; b_{n_b}]^T \in \mathbb{R}^{n_b}$,    (4c)
where $n_\theta = n_a + n_b$. The extended regressor vectors for the $k$-th measured data are defined as
$\bar{\varphi}_k \triangleq [-y_k \;\; \varphi_k^T]^T \in \mathbb{R}^{n_\theta+1}$,    (5a)
$\bar{\varphi}_{y_k} \triangleq [-y_k \;\; \varphi_{y_k}^T]^T \in \mathbb{R}^{n_a+1}$,    (5b)
where
$\varphi_k \triangleq [\varphi_{y_k}^T \;\; \varphi_{u_k}^T]^T \in \mathbb{R}^{n_\theta}$,    (5c)
$\varphi_{y_k} \triangleq [-y_{k-1} \; \ldots \; -y_{k-n_a}]^T \in \mathbb{R}^{n_a}$,    (5d)
$\varphi_{u_k} \triangleq [u_{k-1} \; \ldots \; u_{k-n_b}]^T \in \mathbb{R}^{n_b}$.    (5e)
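For concreteness, the following minimal sketch (Python/numpy) illustrates how the regressor vectors (5c)-(5e) and the extended regressor (5a) could be assembled from measured data under the sign convention adopted above; the function name and array layout are illustrative only.

```python
import numpy as np

def regressors(u, y, k, na, nb):
    """Assemble phi_{y_k}, phi_{u_k}, phi_k and the extended regressor
    of (5a)-(5e) at (0-based) time index k, assuming k >= max(na, nb)."""
    phi_y = -np.array([y[k - i] for i in range(1, na + 1)])  # (5d): negated delayed outputs
    phi_u = np.array([u[k - i] for i in range(1, nb + 1)])   # (5e): delayed inputs
    phi = np.concatenate((phi_y, phi_u))                     # (5c): dimension n_theta
    phi_bar = np.concatenate(([-y[k]], phi))                 # (5a): dimension n_theta + 1
    return phi, phi_bar
```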
The noise contributions in the corresponding regressor vectors are denoted by a tilde, i.e. $\tilde{[\cdot]}$,
whereas the noise-free signals are denoted by a zero subscript, i.e. $[\cdot]_0$. From (3) it follows that
$\bar{\varphi}_k = \bar{\varphi}_{0_k} + \tilde{\bar{\varphi}}_k$.    (6)
The notation $\Sigma_{gd}$ is used as a general notion for the covariance matrix of the vectors $g_k$ and $d_k$,
whereas $\xi_{gf}$ is utilised for a covariance vector with $f_k$ being a scalar. The corresponding estimates
are denoted by a hat. In addition, $0_{g \times d}$ denotes the null matrix of arbitrary dimension $g \times d$ and a single index is used
in the case of a column vector as well as in the case of a square matrix, e.g. the identity matrix $I_g$.
The auto-correlation elements, denoted $r_{\tilde{y}}(\cdot)$, are defined as
$r_{\tilde{y}}(\tau) \triangleq E[\tilde{y}_k \tilde{y}_{k-\tau}]$,    (7)
where E[·] is the expected value operator. Introducing
$\rho \triangleq [\rho_y^T \;\; \sigma_{\tilde{u}}]^T \in \mathbb{R}^{n_a+2}$,    (8a)
$\rho_y \triangleq [r_{\tilde{y}}(0) \; \ldots \; r_{\tilde{y}}(n_a)]^T \in \mathbb{R}^{n_a+1}$,    (8b)
the dynamic identification problem in the EIV frame-
work considered here is formulated as:
Problem 1. (Dynamic EIV identification problem)
Given $N$ samples of the measured signals, i.e. $\{u_k\}_{k=1}^{N}$ and $\{y_k\}_{k=1}^{N}$, determine the vector
$\Theta \triangleq [\theta^T \;\; \rho^T]^T \in \mathbb{R}^{n_\theta+n_a+2}$.    (9)
3 REVIEW OF APPROACHES
This section briefly reviews the EBCLS technique and
the offline EBPM algorithm.
3.1 Extended Bias Compensated Least
Squares
Denoting an estimate by $\hat{[\cdot]}$, a solution of the system (1)-(3) in the LS sense is given by
$\hat{\theta}_{LS} = \hat{\Sigma}_{x\varphi}^{\dagger} \hat{\xi}_{xy}$,    (10)
where $[\cdot]^{\dagger}$ is the pseudo-inverse operator defined by $A^{\dagger} \triangleq (A^T A)^{-1} A^T$ and $x_k \in \mathbb{R}^{n_x}$
denotes an arbitrary instrumental vector with $n_x \geq n_\theta$. Due to the measurement noise, unless the elements
of $x_k$ are uncorrelated with $\tilde{\varphi}_k$, the solution obtained is biased. In order to achieve an unbiased
estimate of $\theta$, a bias compensation procedure is required to be carried out (Söderström, 2007). This
consideration yields the EBCLS estimator defined as
$\hat{\theta}_{EBCLS} \triangleq \left( \hat{\Sigma}_{x\varphi} - \Sigma_{\tilde{x}\tilde{\varphi}} \right)^{\dagger} \left( \hat{\xi}_{xy} - \xi_{\tilde{x}\tilde{y}} \right)$.    (11)
Note that $\Sigma_{\tilde{x}\tilde{\varphi}}$ and $\xi_{\tilde{x}\tilde{y}}$, in general, are functions of $\rho$, which, in turn,
will depend on the elements contained in the instrument vector $x_k$.
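As an illustration, a one-line numerical realisation of the EBCLS rule (11) could take the following form (Python/numpy sketch); the arguments Sigma_noise and xi_noise stand for $\Sigma_{\tilde{x}\tilde{\varphi}}$ and $\xi_{\tilde{x}\tilde{y}}$ and are assumed to have been constructed from $\rho$ for the chosen instruments.

```python
import numpy as np

def ebcls_estimate(Sigma_xphi_hat, xi_xy_hat, Sigma_noise, xi_noise):
    """EBCLS estimator (11): subtract the noise-induced contributions from
    the sample (cross-)covariances and solve the compensated LS problem."""
    return np.linalg.pinv(Sigma_xphi_hat - Sigma_noise) @ (xi_xy_hat - xi_noise)
```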
3.2 Extended Bilinear Parametrisation
Method
The bilinear parametrisation method is applicable
for problems that are bilinear in the parameters, see
(Ljung, 1999) for details, and it is presented here
in accordance with the development proposed in
(Larkowski et al., 2008).
Based on the EBCLS rule given by (11) a bilinear
(in the parameters) cost function can be formulated,
i.e.
$\hat{\Theta} = \arg\min_{\Theta} V(\Theta)$,    (12)
where
$V(\Theta) \triangleq \left\| \hat{\xi}_{xy} - \xi_{\tilde{x}\tilde{y}} - \left( \hat{\Sigma}_{x\varphi} - \Sigma_{\tilde{x}\tilde{\varphi}} \right) \theta \right\|_2^2$.    (13)
Note that the instruments $x_k$ must be chosen such that the resulting problem is soluble, i.e. the total
number of unknowns is less than or equal to the total number of equations, see (Larkowski et al., 2008)
for a detailed treatment. Alternatively, utilising the bilinearity property, (13) can be re-expressed as
$V(\Theta) = \left\| \hat{\xi}_{xy} - \hat{\Sigma}_{x\varphi} \theta - W \rho \right\|_2^2$,    (14)
where $W \triangleq S_1 - S_2(\theta) \in \mathbb{R}^{n_x \times (n_a+2)}$ such that $S_1 \rho \triangleq \xi_{\tilde{x}\tilde{y}}$ and $S_2(\theta) \rho \triangleq \Sigma_{\tilde{x}\tilde{\varphi}} \theta$.
It is observed that for fixed $\rho$ (i.e. the expressions $\Sigma_{\tilde{x}\tilde{\varphi}}$ and $\xi_{\tilde{x}\tilde{y}}$) the cost
function (13) is linear in $\theta$. Analogously, for fixed $\theta$ (i.e. the matrix $W$) the cost function (14)
is linear in $\rho$. Consequently, a natural approach is to treat (13) and (14) as separate LS problems,
cf. (Ljung, 1999). This leads to a two-step algorithm where the LS solutions of the sub-problems defined
by (13) and (14) are obtained at each iteration. Furthermore, local convergence of such an algorithm is
guaranteed, see (Ljung, 1999).
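A schematic sketch of this offline two-step iteration is given below (Python/numpy). The helper functions noise_terms and build_W, which realise the mappings $\rho \mapsto (\Sigma_{\tilde{x}\tilde{\varphi}}, \xi_{\tilde{x}\tilde{y}})$ and $\theta \mapsto W = S_1 - S_2(\theta)$ for the chosen instruments, are problem-specific and are assumed to be supplied by the user.

```python
import numpy as np

def ebpm_offline(Sigma_xphi, xi_xy, noise_terms, build_W, n_theta, n_rho, n_iter=20):
    """Offline EBPM: alternate between the two linear LS sub-problems (13) and (14).
    noise_terms(rho) -> (Sigma_noise, xi_noise), i.e. Sigma_{x~phi~} and xi_{x~y~};
    build_W(theta)   -> W = S1 - S2(theta).  Both are instrument-specific helpers."""
    theta = np.zeros(n_theta)
    rho = np.zeros(n_rho)
    for _ in range(n_iter):
        # Step 1: for fixed rho the cost (13) is linear in theta (EBCLS step).
        Sigma_n, xi_n = noise_terms(rho)
        theta = np.linalg.pinv(Sigma_xphi - Sigma_n) @ (xi_xy - xi_n)
        # Step 2: for fixed theta the cost (14) is linear in rho.
        W = build_W(theta)
        rho = np.linalg.pinv(W) @ (xi_xy - Sigma_xphi @ theta)
    return theta, rho
```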
4 RECURSIVE EXTENDED
BILINEAR
PARAMETRISATION METHOD
This section presents the proposed recursive realisa-
tion of the EBPM technique, denoted REBPM. First
the problem of an online update of the parameter vec-
tor is addressed. Subsequently, two approaches for
updating the pseudo-inverse of the extended obser-
vation matrix are considered. Finally, the problem
of calculating the input noise variance and the auto-
correlation elements of the output noise is discussed.
4.1 Recursive Update of Parameter
Vector
Considering (11) and by making use of (10) it follows
that
$\theta = \hat{\theta}_{LS} + \hat{\Sigma}_{x\varphi}^{\dagger} \left( \Sigma_{\tilde{x}\tilde{\varphi}} \theta - \xi_{\tilde{x}\tilde{y}} \right)$.    (15)
It is remarked that the expression $\hat{\Sigma}_{x\varphi}^{\dagger} ( \Sigma_{\tilde{x}\tilde{\varphi}} \theta - \xi_{\tilde{x}\tilde{y}} )$
represents the bias of the LS estimator. Since the true
value of θ on the right hand side of (15) is unknown, a
natural approach is to utilise the most recent estimate,
i.e. the previous value. This leads to the following
recursive EBCLS scheme
$\hat{\theta}^{k}_{EBCLS} = \hat{\theta}^{k}_{LS} + \left( \hat{\Sigma}^{k}_{x\varphi} \right)^{\dagger} \left( \Sigma^{k}_{\tilde{x}\tilde{\varphi}} \hat{\theta}^{k-1}_{EBCLS} - \xi^{k}_{\tilde{x}\tilde{y}} \right)$.    (16)
Despite the fact that an inevitable error is introduced by assuming $\hat{\theta}^{k}_{EBCLS} \approx \hat{\theta}^{k-1}_{EBCLS}$,
the above approach, also known as the stationary iterative LS principle (Björck, 1996), has been successfully
employed in several recursive as well as iterative algorithms.
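In code, a single correction step of the form (16) could be realised as follows (Python/numpy sketch); the noise terms are assumed to have been evaluated from the current estimate $\hat{\rho}_k$ for the chosen instruments.

```python
import numpy as np

def rebcls_step(theta_ls_k, Sigma_pinv_k, Sigma_noise_k, xi_noise_k, theta_prev):
    """Recursive EBCLS correction (16): the unknown theta on the right-hand
    side of (15) is replaced by the previous compensated estimate."""
    return theta_ls_k + Sigma_pinv_k @ (Sigma_noise_k @ theta_prev - xi_noise_k)
```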
4.2 Recursive Update of Pseudo-inverse
Considering equation (16), it is observed that a recursive update of the pseudo-inverse of $\hat{\Sigma}^{k}_{x\varphi}$,
as well as of the LS estimate, i.e. $\hat{\theta}^{k}_{LS}$, is required. This problem can be tackled by the two
approaches described below.
Approach based on the Matrix Pseudo-inverse Lemma - REBPM$_1$. The first, i.e. direct, approach is to utilise
an extension of the matrix inverse lemma, namely the matrix pseudo-inverse lemma, see (Feng et al., 2001).
This allows the recursive computation of the expression $\hat{\Sigma}_{x\varphi}^{\dagger}$ as well as of the corresponding
$\hat{\theta}_{LS}$. The algorithm can be summarised as follows:
$\hat{\theta}^{k}_{LS} = \hat{\theta}^{k-1}_{LS} + L_k \left( y_k - \varphi_k^T \hat{\theta}^{k-1}_{LS} \right)$,    (17a)
$L_k = \frac{ \left( \hat{\Sigma}^{k-1}_{x\varphi} \right)^{\dagger} x_k }{ k - 1 + \varphi_k^T \left( \hat{\Sigma}^{k-1}_{x\varphi} \right)^{\dagger} x_k }$,    (17b)
$\left( \hat{\Sigma}^{k}_{x\varphi} \right)^{\dagger} = \frac{k}{k-1} \left[ \left( \hat{\Sigma}^{k-1}_{x\varphi} \right)^{\dagger} - L_k \varphi_k^T \left( \hat{\Sigma}^{k-1}_{x\varphi} \right)^{\dagger} \right]$,    (17c)
$\hat{\Sigma}^{k}_{x\varphi} = \hat{\Sigma}^{k-1}_{x\varphi} + \frac{1}{k} \left( x_k \varphi_k^T - \hat{\Sigma}^{k-1}_{x\varphi} \right)$,    (17d)
$\hat{\xi}^{k}_{xy} = \hat{\xi}^{k-1}_{xy} + \frac{1}{k} \left( x_k y_k - \hat{\xi}^{k-1}_{xy} \right)$.    (17e)
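A compact sketch of one REBPM$_1$ recursion (17a)-(17e) is given below (Python/numpy); the initial value of the pseudo-inverse is assumed to have been obtained offline from the first $\alpha$ samples, as discussed next, and the sample index k is 1-based with k >= 2.

```python
import numpy as np

def rebpm1_update(k, x_k, phi_k, y_k, theta_ls, Sigma_pinv, Sigma, xi):
    """One REBPM_1 recursion (17a)-(17e): rank-one updates of the running
    (cross-)covariance estimates and of the pseudo-inverse via the matrix
    pseudo-inverse lemma."""
    Spx = Sigma_pinv @ x_k                                    # (Sigma^{k-1})^dagger x_k
    L_k = Spx / ((k - 1) + phi_k @ Spx)                       # (17b)
    theta_ls = theta_ls + L_k * (y_k - phi_k @ theta_ls)      # (17a)
    Sigma_pinv = k / (k - 1) * (Sigma_pinv - np.outer(L_k, phi_k @ Sigma_pinv))  # (17c)
    Sigma = Sigma + (np.outer(x_k, phi_k) - Sigma) / k        # (17d)
    xi = xi + (x_k * y_k - xi) / k                            # (17e)
    return theta_ls, Sigma_pinv, Sigma, xi
```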
The main shortcoming of the pseudo-inverse approach when dealing with practical applications results from its
relatively high sensitivity with respect to the initialisation of the pseudo-inverse of the matrix $\hat{\Sigma}^{k}_{x\varphi}$.
This issue is not trivial and can lead to divergence of the overall algorithm. In order to appropriately initialise
the expression $( \hat{\Sigma}^{k}_{x\varphi} )^{\dagger}$, it is required that the pseudo-inverse is computed offline after an
arbitrary number, denoted $\alpha$, of measurements is taken and before the recursive algorithm commences operation.
Remark 1. It is noted that the uniqueness of $\hat{\Sigma}_{x\varphi}^{\dagger}$ in the case of recursive approaches is not
always guaranteed when utilising equations (17), see (Linden, 2008) for further details. As a consequence, the
corresponding estimate of $\theta^{k}_{LS}$ may not represent the optimal, in terms of the minimum variance, solution
to the overdetermined set of equations given by (10).
Approach based on Extended Instrumental Variables - REBPM$_2$. As an alternative to employing the matrix
pseudo-inverse lemma, an approach based on the recursive extended instrumental variables technique, see
(Friedlander, 1984), can be utilised in order to obtain, albeit indirectly, a recursive update of
$( \hat{\Sigma}^{k}_{x\varphi} )^{\dagger}$. Define
$P_k = \left[ \left( \hat{\Sigma}^{k}_{x\varphi} \right)^T \hat{\Sigma}^{k}_{x\varphi} \right]^{-1}$.    (18)
In this approach the expression $P_k$ is updated recursively, rather than the total pseudo-inverse
$( \hat{\Sigma}^{k}_{x\varphi} )^{\dagger}$.
The algorithm can be summarised as:
$\hat{\theta}^{k}_{LS} = \hat{\theta}^{k-1}_{LS} + K_k \left( v_k - \phi_k^T \hat{\theta}^{k-1}_{LS} \right)$,    (19a)
$K_k = P_{k-1} \phi_k \left( \Lambda_k + \phi_k^T P_{k-1} \phi_k \right)^{-1}$,    (19b)
$\Lambda_k = \begin{bmatrix} -x_k^T x_k & 1 \\ 1 & 0 \end{bmatrix}$,    (19c)
$\phi_k = \begin{bmatrix} w_k & \frac{1}{k} \varphi_k \end{bmatrix}$,    (19d)
$w_k = \frac{k-1}{k} \left( \hat{\Sigma}^{k-1}_{x\varphi} \right)^T x_k$,    (19e)
$v_k = \frac{1}{k} \begin{bmatrix} (k-1)\, x_k^T \hat{\xi}^{k-1}_{xy} \\ y_k \end{bmatrix}$,    (19f)
$P_k = P_{k-1} - K_k \phi_k^T P_{k-1}$    (19g)
with $\hat{\Sigma}^{k}_{x\varphi}$ and $\hat{\xi}^{k}_{xy}$ updated as in equations (17d) and (17e), respectively. Since it is the
expression $P_k$ which is obtained recursively, in order to calculate $( \hat{\Sigma}^{k}_{x\varphi} )^{\dagger}$ for the recursive
bias compensation equation (16), an additional matrix product has to be computed, i.e.
$\left( \hat{\Sigma}^{k}_{x\varphi} \right)^{\dagger} = P_k \left( \hat{\Sigma}^{k}_{x\varphi} \right)^T$.    (20)
Consequently, the pseudo-inverse of $\hat{\Sigma}^{k}_{x\varphi}$ is obtained in an indirect manner. Moreover, note that the
recursive algorithm (19) requires the inverse of a matrix of dimension $2 \times 2$ at each recursion. This, however,
does not significantly increase the associated computational burden. On the other hand, the important advantages
of this algorithm are that, firstly, it can be easily initialised and, secondly, it is relatively insensitive to the
quality of the initial values. With reference
to (Friedlander, 1984), in the case of no a priori in-
formation the initialisation can be performed as
$\hat{\Sigma}^{0}_{x\varphi} = \mu \begin{bmatrix} I_{n_\theta} \\ 0_{(n_x - n_\theta) \times n_\theta} \end{bmatrix}$,   $P_0 = \frac{1}{\mu^2} I_{n_\theta}$,   $\hat{\theta}^{0}_{LS} = 0_{n_\theta \times 1}$.    (21)
The scalar parameter $\mu$ allows the speed of convergence to be adjusted, hence it affects the ‘smoothness’ of
$\hat{\Theta}$ (i.e. a large value of $\mu$ corresponds to slow convergence and smooth parameter estimates). Further
algorithmic details ensuring that the update of the matrix $P_k$, given by (19g), remains positive (semi-)definite
are addressed in (Friedlander, 1984).
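For completeness, a sketch of the initialisation (21) and of one REBPM$_2$ recursion (19a)-(19g), together with the reconstruction (20) of the pseudo-inverse, is given below (Python/numpy); the updates of $\hat{\Sigma}^{k}_{x\varphi}$ and $\hat{\xi}^{k}_{xy}$ follow (17d)-(17e), and the additional safeguards of (Friedlander, 1984) for keeping $P_k$ positive (semi-)definite are omitted.

```python
import numpy as np

def rebpm2_init(n_theta, n_x, mu=100.0):
    """Initialisation (21) in the absence of a priori information."""
    Sigma = mu * np.vstack((np.eye(n_theta), np.zeros((n_x - n_theta, n_theta))))
    P = np.eye(n_theta) / mu**2
    theta_ls = np.zeros(n_theta)
    xi = np.zeros(n_x)
    return theta_ls, P, Sigma, xi

def rebpm2_update(k, x_k, phi_k, y_k, theta_ls, P, Sigma, xi):
    """One REBPM_2 recursion (19a)-(19g); P_k tracks (18) and the
    pseudo-inverse required in (16) is recovered via (20).
    The sample index k is 1-based with k >= 2."""
    w_k = (k - 1) / k * (Sigma.T @ x_k)                            # (19e)
    Phi_k = np.column_stack((w_k, phi_k / k))                      # (19d), n_theta x 2
    Lam_k = np.array([[-(x_k @ x_k), 1.0], [1.0, 0.0]])            # (19c)
    v_k = np.array([(k - 1) * (x_k @ xi), y_k]) / k                # (19f)
    K_k = P @ Phi_k @ np.linalg.inv(Lam_k + Phi_k.T @ P @ Phi_k)   # (19b)
    theta_ls = theta_ls + K_k @ (v_k - Phi_k.T @ theta_ls)         # (19a)
    P = P - K_k @ Phi_k.T @ P                                      # (19g)
    Sigma = Sigma + (np.outer(x_k, phi_k) - Sigma) / k             # (17d)
    xi = xi + (x_k * y_k - xi) / k                                 # (17e)
    Sigma_pinv = P @ Sigma.T                                       # (20)
    return theta_ls, P, Sigma, xi, Sigma_pinv
```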
4.3 Determination of Noise
Auto-correlation Elements
Since the matrix $W_k$ is sparse, in general, the computational effort involved in its pseudo-inverse is negligible
when compared to that of $\hat{\Sigma}^{k}_{x\varphi}$. Therefore, it is the pseudo-inverse of $\hat{\Sigma}^{k}_{x\varphi}$ which forms
a crucial bottleneck of the overall algorithm. Consequently, a recursive computation of $\hat{\rho}_k$ is not considered
here and its estimate is determined offline at each recursion by solving (14) in the LS sense, i.e.
$\hat{\rho}_k = W_k^{\dagger} \left( \hat{\xi}^{k}_{xy} - \hat{\Sigma}^{k}_{x\varphi} \hat{\theta}^{k}_{EBCLS} \right)$.    (22)
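Numerically, (22) amounts to a single pseudo-inverse of the sparse matrix $W_k$ (Python/numpy sketch; the construction of $W_k$ from the current $\hat{\theta}^{k}_{EBCLS}$ is instrument-specific and assumed given).

```python
import numpy as np

def rho_estimate(W_k, xi_xy_k, Sigma_xphi_k, theta_ebcls_k):
    """Noise auto-correlation estimate (22), obtained by solving (14)
    in the LS sense for the current parameter estimate."""
    return np.linalg.pinv(W_k) @ (xi_xy_k - Sigma_xphi_k @ theta_ebcls_k)
```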
5 SIMULATION STUDIES
This section addresses a numerical analysis of the two proposed recursive realisations of the EBPM approach,
namely REBPM$_1$ and REBPM$_2$, when applied for the purpose of identifying a SISO discrete-time LTI second
order system within the EIV framework. The system to be identified is described by
$\theta = \begin{bmatrix} -1.5 & 0.7 & 1.0 & 0.5 \end{bmatrix}^T$    (23)
with the input generated by
$u_{0_k} = 0.5 u_{0_{k-1}} + \beta_k$,    (24)
where $\beta_k$ is a white, zero mean sequence of unity variance. The input noise sequence is zero mean, white of
variance $\sigma_{\tilde{u}}$ and the coloured output noise sequence is generated by
$\tilde{y}_k = 0.7 \tilde{y}_{k-1} + \gamma_k$,    (25)
where $\gamma_k$ is zero mean, white and of variance $\sigma_{\gamma}$. In the case of both algorithms the instrumental
vector is based on the instruments proposed in (Ekman et al., 2006), i.e. built from delayed inputs and delayed
outputs, and utilised with $n_x = 10$.
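For reproducibility, a possible realisation of the simulation setup (23)-(25) is sketched below (Python/numpy); the signs of the $a$-parameters follow the reconstruction of (23) above, i.e. $A(q^{-1}) = 1 - 1.5q^{-1} + 0.7q^{-2}$, and the noise standard deviations are taken as the square roots of the variances $\sigma_{\tilde{u}}$ and $\sigma_{\gamma}$.

```python
import numpy as np

def generate_data(N, sigma_u_tilde=0.1, sigma_gamma=2.0, seed=0):
    """One realisation of the benchmark EIV data set: noise-free input (24),
    second-order system (1)/(23), white input noise and AR(1) output noise (25)."""
    rng = np.random.default_rng(seed)
    beta = rng.standard_normal(N)                    # unit-variance driving sequence
    gamma = np.sqrt(sigma_gamma) * rng.standard_normal(N)
    u0 = np.zeros(N)
    y0 = np.zeros(N)
    yt = np.zeros(N)                                 # coloured output noise
    for k in range(2, N):
        u0[k] = 0.5 * u0[k - 1] + beta[k]                                     # (24)
        y0[k] = 1.5 * y0[k - 1] - 0.7 * y0[k - 2] + u0[k - 1] + 0.5 * u0[k - 2]
        yt[k] = 0.7 * yt[k - 1] + gamma[k]                                    # (25)
    u = u0 + np.sqrt(sigma_u_tilde) * rng.standard_normal(N)                  # (3)
    y = y0 + yt
    return u, y, u0, y0
```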
Table 1: Results of the estimation of model parameters and
auto-correlation elements of the noise sequences.
            true      EBPM            REBPM_1         REBPM_2
SNR 11dB
a_1        -1.500    -1.501±0.041    -1.504±0.051    -1.494±0.023
a_2         0.700     0.701±0.045     0.705±0.056     0.694±0.024
b_1         1.000     0.998±0.039     0.996±0.045     1.001±0.038
b_2         0.500     0.500±0.072     0.495±0.083     0.508±0.051
σ_ũ         0.100     0.100±0.054     0.095±0.065     0.124±0.052
r_ỹ(0)      3.922     3.273±2.349     2.647±3.902     3.834±1.376
r_ỹ(1)      2.745     2.168±1.938     1.631±3.275     2.618±1.174
r_ỹ(2)      1.922     1.540±0.949     1.250±1.612     1.721±0.715
e_1                   0.001±0.001     0.004±0.005     0.001±0.001
e_2                   0.097±0.120     1.187±3.464     0.143±0.197
Λ                     0               2               0
T                                     1.381±0.102     1.663±0.134
The robustness of the two algorithms is examined via a Monte-Carlo simulation study comprising 100 runs.
The mean values of the estimates obtained at the last recursion, i.e. for $k = N$, are recorded and compared with
the corresponding results produced by the offline EBPM. The overall quality of the estimators is assessed via the
following two performance criteria:
$e_1 \triangleq \left\| \hat{\theta}^{\lambda}_{N} - \theta \right\|_2^2$,   $e_2 \triangleq \left\| \hat{\rho}^{\lambda}_{N} - \rho \right\|_2^2$,    (26)
where $\lambda$ denotes the $\lambda$-th Monte-Carlo run. Prior to the calculation of the performance indices $e_1$
and $e_2$ the possible outliers are removed from the data. An estimate is classified as an outlier if
$\| \hat{\theta}^{\lambda}_{N} \|_2 > 10$. The number of outliers is denoted by $\Lambda$. Additionally, the computation
time in seconds, denoted $T$, is recorded.
The initial values of the parameters are set as follows: $\alpha = 50$ for the REBPM$_1$ and $\mu = 100$ for the
REBPM$_2$. In order to provide a fair comparison, in the case of the REBPM$_2$, the bias compensation phase is
enabled from sample 50 onwards, although the expressions $\hat{\theta}^{k}_{LS}$ and $\hat{\Sigma}^{k}_{x\varphi}$ are recursively
calculated from the commencement of the algorithm. The values of the noise parameters are chosen as
$\sigma_{\tilde{u}} = 0.1$ and $\sigma_{\gamma} = 2.0$. Consequently, the noise auto-correlation vector is given by
$\rho = \begin{bmatrix} 3.922 & 2.745 & 1.922 & 0.100 \end{bmatrix}^T$,    (27)
which yields an approximately equal signal-to-noise ratio (SNR) of around 11dB on both the input and the
output signals. The results obtained, expressed in terms of mean value ± standard deviation, are presented in
Table 1. It is observed that the mean values of the model parameters obtained by the algorithms, see $e_1$, are
relatively accurate, close to the true values and characterised by acceptable standard deviations. In the case of
$e_2$ the estimates $\hat{\rho}$ are relatively less precise, especially those produced by the REBPM$_1$.
In general, comparison of the two recursive realisations of the EBPM reveals that it is the REBPM$_2$ which
produces the more accurate results overall. Moreover, it is noted that in the case of the REBPM$_1$ the algorithm
diverged twice, producing two outliers. In terms of the computational burden, the time required by the REBPM$_2$
is slightly greater when compared to that of the REBPM$_1$, i.e. the latter technique is faster by approximately
17% with respect to the former method.
In general, the experiments carried out seem to suggest that the REBPM$_2$ is more advantageous than the
REBPM$_1$ due to its simpler initialisation, greater robustness and absence of convergence problems, at least
under the conditions considered here.
6 CONCLUSIONS
A recursive realisation of the extended bilinear
parametrisation method for the identification of dy-
namical linear discrete-time time-invariant single-
input single-output errors-in-variables models has
been proposed. Two alternative approaches for the
online update of the pseudo-inverse of the extended
observation covariance matrix have been considered.
The first approach is based on the matrix pseudo-
inverse lemma, whereas the second is constructed within
the framework of the extended instrumental variables
technique. For the cases considered, the two resulting
algorithms appear to be relatively robust and they are
also found to yield precise estimates of the model pa-
rameters. Results suggest that the instrumental vari-
ables based approach would appear to be the superior
of the two developed algorithms.
REFERENCES
Björck, Å. (1996). Numerical Methods for Least Squares
Problems. SIAM, Philadelphia.
Ekman, M. (2005a). Identification of linear systems with
errors in variables using separable nonlinear least
squares. In Proc. of 16th IFAC World Congress,
Prague, Czech Republic.
Ekman, M. (2005b). Modeling and Control of Bilinear Sys-
tems: Applications to the Activated Sludge Process.
PhD thesis, Uppsala University, Sweden.
Ekman, M., Hong, M., and Söderström, T. (2006). A sep-
arable nonlinear least-squares approach for identifica-
tion of linear systems with errors in variables. In 14th
IFAC Symp. on System Identification, Newcastle, Aus-
tralia.
Feng, D., Zhang, H., Zhang, X., and Bao, Z. (2001). An
extended recursive least-squares algorithm. Signal
Proc., 81(5):1075–1081.
Friedlander, B. (1984). The overdetermined recursive in-
strumental variable method. IEEE Trans. on Auto-
matic Control, 29(4):353–356.
Ikenoue, M., Kanae, S., Yang, Z., and Wada, K.
(2008). Bias-compensation based method for errors-
in-variables model identification. In Proc. of 17th
IFAC World Congress, pages 1360–1365, Seoul, South
Korea.
Larkowski, T., Linden, J. G., Vinsonneau, B., and Burn-
ham, K. J. (2008). Identification of errors-in-variables
systems via extended compensated least squares for
the case of coloured output noise. In The 19th Int.
Conf. on Systems Engineering, pages 71–76, Las Ve-
gas, USA.
Linden, J. G. (2008). Algorithms for recursive Frisch
scheme identification and errors-in-variables filter-
ing. PhD thesis, Coventry University, UK.
Ljung, L. (1999). System Identification - Theory for the
User. Prentice Hall PTR, New Jersey, USA, 2nd edi-
tion.
Mahata, K. (2007). An improved bias-compensation ap-
proach for errors-in-variables model identification.
Automatica, 43(8):1339–1354.
Markovsky, I. and Van Huffel, S. (2007). Overview of to-
tal least-squares methods. Signal Proc., 87(10):2283–
2302.
Söderström, T. (2007). Errors-in-variables methods in sys-
tem identification. Automatica, 43(6):939–958.