there is LOOB and a closed-loop estimator when the observation arrives at the destination. In effect, this amounts to designing an estimator that is strongly time-varying and stochastic in nature. To avoid the random sampling and stochastic behaviour of the resulting Kalman filter, (Khan and Gu, 2009b) proposed several approaches that compensate for the loss of observations in the state estimation through Linear Prediction.
Throughout this paper we shall call the variables in the case of loss of data "compensated variables"; e.g. $P^{\{2\}}_k$ is called the compensated filtered error covariance at time step $k$ with loss of observation. The rest of the paper is organized as follows. The theory of the Linear Prediction Coefficient (LPC) is overviewed in Section 2. In Section 3 we discuss the proposed sub-optimal Kalman filter with loss of data. The mass-spring-dashpot case study is given in Section 4. Simulation results are presented in Section 5. Section 6 summarizes our conclusions.
2 THEORY OF LINEAR
PREDICTION COEFFICIENT
Linear prediction (LP) is an integral part of signal reconstruction, e.g. in speech recognition. The fundamental idea behind this technique is that a signal can be approximated as a linear combination of its past samples; see e.g. (Rabiner and Juang, 1993). Whenever an observation is lost, a window of past samples is selected to approximate the lost data. The weights assigned to these samples are computed by minimizing the mean square error and are termed Linear Prediction Coefficients. Of the two leading LPC techniques (namely Internal and External LPC), we shall develop and employ External LPC for LOOB, which suits our problem under the following constraints:
• The statistical properties of the signal are assumed to vary slowly with time.
• The loss window must not be too long, otherwise the prediction performance degrades.
In this paper, the LP technique is termed modified because conventional LPC provides no defined strategy for choosing the number of previous samples to use, whereas we have defined several simple-to-implement algorithms to determine this factor. One of them is explained in the subsequent section.
Let us assume that the dynamics of the LTI system are given in discrete time and that the data or observation is lost at time instant $k$. The LP is performed as
$$\bar{z}_k = \sum_{i=1}^{n} \alpha_i z_{k-i} \qquad (1)$$
where $\bar{z}_k$ is called the "compensated observation", the $\alpha_i$'s are the linear prediction coefficient weights for the previous observations, and $n$ denotes the order of the LPC filter. Generally speaking, $n$ is the maximum number of previous observations considered in computing the compensated observation vector. Also, $n$ must be chosen appropriately: a higher value of $n$ does not guarantee a more accurate approximation of the signal; rather, an optimal value of $n$ yields an efficient approximation and hence prediction, see (Rabiner and Juang, 1993).
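As a minimal numerical sketch of Eq. (1): here the weights $\alpha_i$ are fitted by least squares over a sliding window of past samples, which is one common way to minimize the mean square prediction error; the paper's own strategy for choosing $n$ is not reproduced here.

```python
import numpy as np

def lpc_weights(history, n):
    """Fit order-n linear prediction coefficients by least squares:
    each sample in `history` is regressed on its n predecessors."""
    X = np.array([history[t - n:t][::-1] for t in range(n, len(history))])
    y = history[n:]
    alpha, *_ = np.linalg.lstsq(X, y, rcond=None)
    return alpha

def compensate(history, alpha):
    """Eq. (1): z_bar_k = sum_{i=1}^{n} alpha_i * z_{k-i}."""
    n = len(alpha)
    return float(alpha @ history[-n:][::-1])

# Slowly varying test signal whose final sample is treated as lost.
z = np.sin(0.2 * np.arange(30))
alpha = lpc_weights(z[:-1], n=4)
z_bar = compensate(z[:-1], alpha)   # compensated observation for the lost sample
```

For a slowly varying signal such as this sinusoid, the compensated observation $\bar{z}_k$ is very close to the true lost sample, consistent with the first constraint above.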
3 DESIGN OF SUB-OPTIMAL KF
WITH LOSS OF DATA
Let us assume that the process under consideration is driven by a random noise signal whose mean and covariance are independent of time, i.e. a wide-sense stationary process, given as
$$x_k = A x_{k-1} + B u_{k-1} + L_d \xi_k \qquad (2)$$
$$z_k = C x_k + v_k \qquad (3)$$
where $A$, $B$ and $C$ have appropriate dimensions, and $x$, $u$, $z$, $\xi$ and $v$ are the state, input, sensed output, plant disturbance and measurement noise, respectively. The plant noise $\xi$ and sensor noise $v$ are assumed to be zero-mean white Gaussian noises.
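To make the setup concrete, Eqs. (2)–(3) can be simulated directly. The matrices $A$, $B$, $C$, $L_d$ and the noise variances below are illustrative assumptions, not the paper's mass-spring-dashpot values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder system matrices (assumed for this sketch).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Ld = np.array([[0.0], [1.0]])
Q, R = 0.01, 0.04                # variances of xi_k and v_k

x = np.zeros((2, 1))
observations = []
for k in range(100):
    u = np.array([[1.0]])                      # constant input u_{k-1}
    xi = rng.normal(0.0, np.sqrt(Q), (1, 1))   # plant disturbance xi_k
    v = rng.normal(0.0, np.sqrt(R), (1, 1))    # measurement noise v_k
    x = A @ x + B @ u + Ld @ xi                # eq. (2): state update
    z = C @ x + v                              # eq. (3): sensed output
    observations.append(z)
```

Dropping entries of `observations` then models the loss of data that the compensation scheme of Eq. (1) must fill in.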
The CKF computes the a priori state estimate based solely on (2). This a priori estimate is then updated with the newly arrived observation at each time instant. In the subsequent section, the performance of the CKF is tested and verified on a mass-spring-dashpot system, which helps illustrate the proposed algorithm. If the observation is not available for any of the reasons mentioned earlier, the compensated observations are calculated through (1).
The a posteriori state estimate using this compensated observation is
$$\bar{x}_{k|k} = x_{k|k-1} + \bar{K}_k (\bar{z}_k - \hat{z}_k) \qquad (4)$$
The corresponding a posteriori error for this estimate is
$$e_{k|k} = x_k - \bar{x}_{k|k} = x_k - x_{k|k-1} - \bar{K}_k (\bar{z}_k - \hat{z}_k) = e_{k|k-1} - \bar{K}_k (\bar{z}_k - \hat{z}_k) \qquad (5)$$
where $x_k$ is the actual state of the system. Conventionally, the cost function of the Kalman filter is obtained from this a posteriori error of the state estimate.
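The compensated update of Eq. (4) can be sketched as follows. The standard Kalman gain and covariance forms are assumed here for illustration; the paper's compensated gain $\bar{K}_k$ and covariance $P^{\{2\}}_k$ may be defined differently in its derivation.

```python
import numpy as np

def compensated_update(x_pred, P_pred, z_bar, C, R):
    """Eq. (4): update the a priori estimate x_pred with the
    compensated observation z_bar in place of a lost measurement."""
    z_hat = C @ x_pred                      # predicted output
    S = C @ P_pred @ C.T + R                # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)     # gain (standard Kalman form)
    x_upd = x_pred + K @ (z_bar - z_hat)    # eq. (4)
    P_upd = (np.eye(P_pred.shape[0]) - K @ C) @ P_pred
    return x_upd, P_upd

# Usage with small illustrative matrices.
x_pred = np.array([[1.0], [0.0]])
P_pred = np.eye(2)
C = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
z_bar = np.array([[1.2]])          # compensated observation from eq. (1)
x_upd, P_upd = compensated_update(x_pred, P_pred, z_bar, C, R)
```

Note that because $\bar{z}_k$ is itself a prediction rather than a true measurement, the error recursion (5) no longer enjoys the optimality of the conventional update, which is why the resulting filter is termed sub-optimal.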
ICINCO 2010 - 7th International Conference on Informatics in Control, Automation and Robotics