The paper is structured as follows. In Section 2, a general formulation of the linear least-squares filtering and prediction problem is given. Then, in Section 3, the recursive algorithms for the filter and all types of predictors, as well as their error covariances, are derived.
2 PROBLEM STATEMENT
Let {x(t), 0 ≤ t < ∞} be a zero-mean signal vector of dimension n which is observed through the following equation:

$$y(t) = x(t) + v(t), \quad 0 \le t < \infty$$

where y(t) represents the n-dimensional observation vector and v(t) is a centered white observation noise, correlated with the signal, with covariance function E[v(t)v'(s)] = r δ(t − s), where r is a positive definite covariance matrix of dimension n × n.
We assume that the autocovariance function of the signal and the cross-covariance function between the signal and the observation noise are factorizable kernels which can be expressed in the following form:

$$R_x(t,s) = \begin{cases} A(t)B'(s), & 0 \le s \le t \\ B(t)A'(s), & 0 \le t \le s \end{cases} \qquad R_{xv}(t,s) = \begin{cases} \alpha(t)\beta'(s), & 0 \le s \le t \\ \gamma(t)\lambda'(s), & 0 \le t \le s \end{cases} \qquad (1)$$

where A(t), B(t), α(t), β(t), γ(t), and λ(t) are bounded matrices of dimensions n × k, n × k, n × l, n × l, n × l′, and n × l′, respectively.
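A standard example of such a factorizable kernel (our illustration, not from the paper) is the exponential autocovariance of a scalar stationary signal: taking k = 1, A(t) = σ²e^{−at} and B(t) = e^{at} in (1) gives

$$R_x(t,s) = \sigma^2 e^{-a|t-s|}, \qquad A(t)B'(s) = \sigma^2 e^{-a(t-s)} \text{ for } s \le t, \qquad B(t)A'(s) = \sigma^2 e^{-a(s-t)} \text{ for } t \le s.$$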
We consider the problem of finding the linear least mean-square error estimator, x̂(t/T), with t ≥ T, of the signal x(t) based on the observations {y(s), s ∈ [0, T]}. It is known that such an estimate is the orthogonal projection of x(t) onto H(y, T) (the Hilbert space spanned by the process {y(s), s ∈ [0, T]}). Hence, x̂(t/T) can be expressed as a linear function of all the observed data of the form

$$\hat{x}(t/T) = \int_0^T h(t,s,T)\, y(s)\, ds, \quad 0 \le s \le T \le t \qquad (2)$$
As a consequence of the orthogonal projection theorem, the impulse response function h(t, s, T) must satisfy the Wiener-Hopf equation

$$R_{xy}(t,s) = \int_0^T h(t,\sigma,T)\, R(\sigma,s)\, d\sigma + h(t,s,T)\, r \qquad (3)$$

for 0 ≤ s ≤ T ≤ t, where R_xy(t,s) = R_x(t,s) + R_xv(t,s) and R(t,s) = R_x(t,s) + R_xv(t,s) + R_vx(t,s).
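To make the role of (3) concrete, the following sketch (our illustration, not part of the paper) discretizes the Wiener-Hopf equation on a uniform grid for a scalar signal (n = 1) and solves the resulting linear system for h(t, ·, T). For simplicity it assumes the noise is uncorrelated with the signal, so that R_xy = R_x and R = R_x; the exponential kernel and the noise intensity r are hypothetical choices.

```python
import numpy as np

# Hypothetical scalar autocovariance of the factorizable form (1):
# R_x(t, s) = exp(-|t - s|), i.e. A(t) = exp(-t), B(s) = exp(s) for s <= t.
def R_x(t, s):
    return np.exp(-np.abs(t - s))

r = 0.5               # observation-noise intensity (scalar, positive)
T, t_est = 1.0, 1.0   # observation horizon and estimation instant (t = T: filtering)
N = 200
s_grid = (np.arange(N) + 0.5) * (T / N)   # midpoint quadrature nodes
ds = T / N

# Discretize (3): R_xy(t, s_i) = sum_j h_j R(s_j, s_i) ds + h_i r,
# which is the linear system (ds * R_mat' + r I) h = rhs,
# with R_mat[j, i] = R(s_j, s_i) and rhs_i = R_xy(t, s_i).
R_mat = R_x(s_grid[:, None], s_grid[None, :])
rhs = R_x(t_est, s_grid)
h = np.linalg.solve(ds * R_mat.T + r * np.eye(N), rhs)
```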
From (1), it is easy to check that R_xy(t,s) and R(t,s) can be written as follows:

$$R_{xy}(t,s) = \begin{cases} F(t)\Gamma'(s), & 0 \le s \le t \\ G(t)\Lambda'(s), & 0 \le t \le s \end{cases} \qquad R(t,s) = \begin{cases} \Lambda(t)\Gamma'(s), & 0 \le s \le t \\ \Gamma(t)\Lambda'(s), & 0 \le t \le s \end{cases} \qquad (4)$$

where F(t) = [A(t), α(t), 0_{n×l′}], G(t) = [B(t), 0_{n×l}, γ(t)], Λ(t) = [A(t), α(t), λ(t)], and Γ(t) = [B(t), β(t), γ(t)] are matrices of dimensions n × m with m = k + l + l′, and 0_{p×q} denotes the (p × q)-dimensional matrix whose elements are zero.
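For instance, the first identity in (4) can be verified directly (a quick check of ours) for 0 ≤ s ≤ t:

$$F(t)\Gamma'(s) = A(t)B'(s) + \alpha(t)\beta'(s) + 0_{n\times l'}\,\gamma'(s) = R_x(t,s) + R_{xv}(t,s) = R_{xy}(t,s),$$

and the identity for R(t,s) follows in the same way, using R_vx(t,s) = [R_xv(s,t)]' = λ(t)γ'(s) for 0 ≤ s ≤ t.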
Note that the optimal linear filter and all kinds of predictors can be expressed through equations (2) and (3). Specifically, taking T = t yields the filtering estimate x̂(t/t); the fixed-point predictor x̂(t_d/T) is obtained by fixing the estimation instant t = t_d > T; the fixed-interval predictor x̂(t/T_d) corresponds to a fixed observation interval [0, T_d], with T_d < t; and, finally, the fixed-lead prediction estimate x̂(T + d/T) is given by (2) and (3) with t = T + d, for any d > 0.
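Continuing the earlier discretization sketch (again our illustration, with y_on_grid denoting hypothetical observations sampled at the quadrature nodes), all four estimates share the same quadrature form of (2); only the estimation instant and the observation horizon change:

```python
def estimate(t_est, T, y_on_grid, N=200):
    """Approximate x_hat(t_est / T) via the quadrature form of (2),
    solving the discretized Wiener-Hopf equation (3) for h(t_est, ., T)."""
    s_grid = (np.arange(N) + 0.5) * (T / N)
    ds = T / N
    R_mat = R_x(s_grid[:, None], s_grid[None, :])
    h = np.linalg.solve(ds * R_mat.T + r * np.eye(N), R_x(t_est, s_grid))
    return ds * h @ y_on_grid

# Filtering: estimate(T, T, y); fixed-lead prediction: estimate(T + d, T, y), d > 0;
# fixed-point: estimate(t_d, T, y) with t_d > T; fixed-interval: estimate(t, T_d, y).
```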
Likewise, the error covariances associated with the above estimates can be defined as

$$P(t/T) = E\big[(x(t) - \hat{x}(t/T))(x(t) - \hat{x}(t/T))'\big] \qquad (5)$$

with a suitable estimation instant, t, and a specific observation interval [0, T].
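By the orthogonality of the estimation error to the observations (a standard consequence of the projection theorem, recalled here for reference rather than quoted from the paper), (5) can also be written as

$$P(t/T) = R_x(t,t) - \int_0^T h(t,s,T)\, R_{xy}'(t,s)\, ds.$$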
Therefore, in the next section, the Wiener-Hopf equation (3) will be used, with the aid of invariant imbedding, to design recursive procedures for the filter and all kinds of predictors of the signal vector x(t), as well as their associated error covariances. We must note that the only hypothesis assumed is that the covariance functions involved are factorizable kernels of the form (1).
3 RECURSIVE LINEAR ESTIMATION ALGORITHMS
Under the hypotheses established in Section 2, an efficient recursive algorithm for the linear least-squares filter and the fixed-point, fixed-interval, and fixed-lead prediction estimates of the signal, together with their associated error covariance functions, is presented in the following theorem.
Theorem 1. The filter and the fixed-point, fixed-interval, and fixed-lead prediction estimates of the signal x(t) are recursively computed as follows:

$$\begin{aligned} \hat{x}(t/t) &= F(t)L(t) \\ \hat{x}(t_d/T) &= F(t_d)L(T) \\ \hat{x}(t/T_d) &= F(t)L(T_d) \\ \hat{x}(T+d/T) &= F(T+d)L(T) \end{aligned} \qquad (6)$$