
(Ferrari-Trecate and De Nicolao, 2001; Young and Pedregal, 1999) as well as to the estimation of time-variable parameters (Young et al., 2001).
In this paper we treat the least-squares linear estimation problem and derive the fixed-interval smoothing algorithm under an innovation approach. This approach provides an expression for the smoother as the sum of the filter and another term, uncorrelated with it, which can be obtained from a backward-time algorithm.
The filtering and fixed-interval smoothing algorithms are applied to a simulated observation model in which the signal cannot be missing in two consecutive observations, a situation covered by the correlation form considered in the theoretical study.
2 ESTIMATION PROBLEM
We consider the least-squares (LS) linear estimation
problem of a discrete-time signal from noisy uncer-
tain observations described by
y(k) = θ(k)z(k) + v(k) (1)
where the involved processes satisfy:
(I) The signal process {z(k); k ≥ 0} has zero mean and its autocovariance function is expressed in a semi-degenerate kernel form, that is,
$$K_z(k,s) = E[z(k)z^T(s)] = \begin{cases} A(k)B^T(s), & 0 \le s \le k \\ B(k)A^T(s), & 0 \le k \le s \end{cases}$$
where $A$ and $B$ are known $n \times M_0$ matrix functions.
(II) The noise process {v(k); k ≥ 0} is a zero-mean white sequence with known autocovariance function, $E[v(k)v^T(s)] = R_v(k)\,\delta_K(k-s)$.
(III) The multiplicative noise {θ(k); k ≥ 0} is a sequence of Bernoulli random variables with $P[\theta(k)=1] = \bar{\theta}(k)$ and autocovariance function
$$K_\theta(k,s) = \begin{cases} 0, & |k-s| \ge 2 \\ E[\theta(k)\theta(s)] - \bar{\theta}(k)\bar{\theta}(s), & |k-s| < 2 \end{cases}$$
(IV) The processes {z(k); k ≥ 0}, {v(k); k ≥ 0}
and {θ(k); k ≥ 0} are mutually independent.
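To make hypotheses (I)-(IV) concrete, the following sketch simulates model (1) for a scalar signal. The specific choices are illustrative assumptions, not the paper's simulation model: the semi-degenerate kernel is taken as $A(k)=\rho^k$, $B(s)=\sigma^2\rho^{-s}$ (realized by a first-order autoregressive signal), and the one-dependent Bernoulli sequence is built as $\theta(k)=1-a(k)(1-a(k+1))$ with $a(k)$ i.i.d. Bernoulli, a construction that makes two consecutive missing observations impossible while keeping $K_\theta(k,s)=0$ for $|k-s| \ge 2$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500

# Illustrative semi-degenerate kernel: K_z(k,s) = A(k)B(s) for s <= k, with
# A(k) = rho**k, B(s) = sigma2 * rho**(-s), i.e. K_z(k,s) = sigma2 * rho**(k-s).
# This covariance is realized by the stationary AR(1) signal below.
rho, sigma2 = 0.95, 1.0
z = np.zeros(N)
z[0] = rng.normal(scale=np.sqrt(sigma2))
for k in range(1, N):
    z[k] = rho * z[k - 1] + rng.normal(scale=np.sqrt(sigma2 * (1 - rho**2)))

# One-dependent multiplicative noise: theta(k) = 1 - a(k)*(1 - a(k+1)) with
# a(k) i.i.d. Bernoulli(p).  theta(k) = 0 forces a(k+1) = 0, while
# theta(k+1) = 0 would force a(k+1) = 1, so consecutive zeros cannot occur;
# theta(k) and theta(s) are independent whenever |k - s| >= 2.
p = 0.3
a = rng.binomial(1, p, size=N + 1)
theta = 1 - a[:N] * (1 - a[1 : N + 1])

# Observations y(k) = theta(k) z(k) + v(k) with zero-mean white noise v.
Rv = 0.5
y = theta * z + rng.normal(scale=np.sqrt(Rv), size=N)

# Sanity check: the signal is never missing in two consecutive observations.
assert not np.any((theta[:-1] == 0) & (theta[1:] == 0))
print("P[theta=1] (empirical):", theta.mean(), "(theory:", 1 - p * (1 - p), ")")
```

Under this construction $P[\theta(k)=1] = 1 - p(1-p)$, and the empirical frequency printed at the end should be close to that value.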
The purpose is to obtain a fixed-interval smoothing algorithm; concretely, assuming that the observations up to a certain time L are available, our aim is to find recursive formulas which allow us to obtain the estimators of the signal, z(k), at any time k ≤ L. For this purpose, we will use an innovation approach. If $\hat{y}(k,k-1)$ denotes the LS linear estimator of y(k) based on {y(1), . . . , y(k − 1)}, then $\nu(k) = y(k) - \hat{y}(k,k-1)$ represents the innovation contained in the observation y(k), that is, the new information provided by y(k) after its estimation from the previous observations. It is known that the LS linear estimator of z(k) based on the observations {y(1), . . . , y(L)}, denoted by $\hat{z}(k,L)$, is equal to the LS linear estimator based on the innovations {ν(1), . . . , ν(L)}. The advantage of the innovation approach for addressing the LS estimation problem comes from the fact that the innovations constitute a white process; then, denoting $\Pi(i) = E[\nu(i)\nu^T(i)]$, the Orthogonal Projection Lemma (OPL) leads to
$$\hat{z}(k,L) = \sum_{i=1}^{L} E[z(k)\nu^T(i)]\,\Pi^{-1}(i)\,\nu(i). \qquad (2)$$
In view of (2), the first step to obtain the estimators is to establish an explicit formula for the innovations, which is presented in Theorem 1. Afterwards, in the next section, we present recursive formulas for the fixed-interval smoother, $\hat{z}(k,L)$, k < L, including that of the filter, $\hat{z}(k,k)$. These formulas have been derived by decomposing (2) as the sum of the filter and a correction term uncorrelated with it, and by obtaining recursive expressions for both terms from the OPL.
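Formula (2) can be checked numerically at the population level. The sketch below uses illustrative assumptions (scalar observations, a randomly generated joint covariance): it builds the innovations by Gram-Schmidt orthogonalization of the observations, verifies that they form a white process, and confirms that the estimator (2) coincides with the direct linear projection $E[z\,y^T](E[y\,y^T])^{-1}y$.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 5

# Illustrative joint second-order description of (z, y(1), ..., y(L)):
# a random positive-definite covariance; all quantities scalar, zero mean.
M = rng.normal(size=(L + 1, L + 1))
Sigma = M @ M.T + 0.1 * np.eye(L + 1)
Sig_zy = Sigma[0, 1:]      # E[z y(i)]
Sig_yy = Sigma[1:, 1:]     # E[y(i) y(j)]

# Population-level Gram-Schmidt: nu(i) = y(i) - yhat(i, i-1).
# Row i of T expresses nu(i) as a linear combination of y(1), ..., y(L).
T = np.zeros((L, L))
Pi = np.zeros(L)           # Pi(i) = E[nu(i)^2]
for i in range(L):
    t = np.eye(L)[i]
    for j in range(i):
        # subtract the projection of y(i) onto nu(j): E[y(i)nu(j)] / Pi(j)
        t = t - (Sig_yy[i] @ T[j] / Pi[j]) * T[j]
    T[i] = t
    Pi[i] = T[i] @ Sig_yy @ T[i]

# Whiteness of the innovations: their covariance matrix is diagonal.
C_nu = T @ Sig_yy @ T.T
assert np.allclose(C_nu, np.diag(np.diag(C_nu)))

# Estimator (2), zhat = sum_i E[z nu(i)] Pi(i)^{-1} nu(i),
# written as a coefficient vector over y(1), ..., y(L).
w_innov = sum((Sig_zy @ T[i]) / Pi[i] * T[i] for i in range(L))

# Direct LS linear projection of z onto y(1), ..., y(L).
w_direct = np.linalg.solve(Sig_yy, Sig_zy)
assert np.allclose(w_innov, w_direct)
print("estimator from innovations matches the direct projection")
```

The agreement of the two coefficient vectors reflects exactly the point made above: since the innovations span the same linear space as the observations and are mutually uncorrelated, the projection decomposes into the sum of one-dimensional projections in (2).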
2.1 Innovation process
When the variables {θ(k); k ≥ 0} modelling the uncertainty are independent, all the information prior to time k required to estimate y(k) is provided by the one-stage predictor of the signal, $\hat{z}(k,k-1)$. However, for the problem at hand, the correlation between θ(k − 1) and θ(k), which must be considered to estimate y(k), is not contained in $\hat{z}(k,k-1)$. Concretely, as indicated in Theorem 1, in this case the innovation is obtained as a linear combination of the new observation, the predictor of the signal and the previous innovation.
Theorem 1. Under hypotheses (I)-(IV), the innovation process associated with the observations given in (1) satisfies
$$\nu(k) = y(k) - \bar{\theta}(k)A(k)O(k-1) - K_\theta(k,k-1)A(k)B^T(k-1)\Pi^{-1}(k-1)\nu(k-1), \quad k \ge 2,$$
$$\nu(1) = y(1),$$
where the vectors O(k) are calculated from
$$O(k) = O(k-1) + J(k)\Pi^{-1}(k)\nu(k), \quad k \ge 1, \qquad O(0) = 0,$$
with
$$J(k) = \bar{\theta}(k)\left[B^T(k) - r(k-1)A^T(k)\right] - K_\theta(k,k-1)J(k-1)\Pi^{-1}(k-1)B(k-1)A^T(k), \quad k \ge 2,$$
$$J(1) = \bar{\theta}(1)B^T(1),$$
and Π(k) the covariance matrix of the innovation,
ICINCO 2004 - SIGNAL PROCESSING, SYSTEMS MODELING AND CONTROL