Section 4 contains an application example. Finally,
Section 5 summarizes the achievements.
2 HIERARCHICAL MODELLING CONCEPT
Assume that there is a stochastic system observed at discrete time instants $k = 1, 2, \dots$, which is to be modelled. The statistical framework employs, among others, parametric models expressing the dependence of the output variable of interest $y_k$ on a nonempty ordered set of observed data $D(k) = \{d_\kappa\}_{\kappa = 0, \dots, k}$, $d_\kappa \in \mathbb{R}^n$. Here, $d_0$ is the prior knowledge represented, for instance, by expert information or a noninformative distribution. The task is to predict the future output $y_{k+1}$, e.g., for control.
Often there is a whole set of available models, mainly those based on the underlying physical principles of the process. However, since in many applications either a precise physical model is not available or a lack of reliable data prevents its use, the user employs black-box or grey-box models. Then, either the (subjectively) best model or models are selected and switched between or, alternatively, a rich-structure model is built as a union of several candidate models. Despite the potential applicability of the latter approach, its over-parametrization, combined with unreliable measurements and their traffic delays, is expected to be fatal.
Our method provides a way around most of the disadvantages of the approaches mentioned above. We propose a hierarchical model composed of three levels:
Low-level Models comprise an arbitrary number of plausible parametric models, such as regressive and state-space ones. They are independent of each other, but their aims are identical: modelling of the same quantity of interest.
Averaging Model is intended for merging the information from the low-level models. The resulting mixture of the low-level models' predictive pdfs, weighted by their evidences, is used to evaluate the predictions.
High-level Model – since industry has several specific requirements related, e.g., to stable control, we add a high-level model. It stabilizes the prediction process. However, the goal of this level can differ from case to case according to the specific needs of the field of application being addressed.
The ensuing sections describe these levels in some detail.
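As a minimal illustration of the averaging level, the sketch below evaluates a mixture of low-level predictive pdfs weighted by their normalized evidences. The function name and interface are hypothetical, introduced only for this example:

```python
def mixture_predictive(predictives, evidences, y):
    """Averaging-model prediction at point y: a mixture of the low-level
    predictive pdfs f_i(y), weighted by their normalized evidences.

    predictives -- list of callables, each evaluating one model's predictive pdf
    evidences   -- matching list of nonnegative evidence values
    """
    total = sum(evidences)
    weights = [e / total for e in evidences]  # normalize evidences to mixture weights
    return sum(w * f(y) for w, f in zip(weights, predictives))
```

With two candidate models whose evidences are 1 and 3, the second model's prediction receives weight 0.75 in the mixture.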
2.1 Low-level Models
The low-level models express the relation between the actual system output $y_k$ and the given data $D(k)$ by a pdf
$$f(y_k \mid D(k-1), \Theta), \qquad (1)$$
where $\Theta$ denotes a multivariate finite model parameter which, under the Bayesian treatment, is considered to be a random variable obeying the pdf
$$g(\Theta \mid D(k-1)). \qquad (2)$$
If this pdf is properly chosen from a class conjugate to the model (1), Bayes' theorem yields a posterior pdf of the same type (Bernardo and Smith, 2001). Then, the rule for the recursive incorporation of new measurements into the parameter pdf reads
$$g(\Theta \mid D(k)) = \frac{f(y_k \mid D(k-1), \Theta)\, g(\Theta \mid D(k-1))}{I_k}, \qquad (3)$$
where
$$I_k = \int f(y_k \mid D(k-1), \Theta)\, g(\Theta \mid D(k-1))\, \mathrm{d}\Theta \qquad (4)$$
$$\phantom{I_k} = f(y_k \mid D(k-1)) \qquad (5)$$
is a normalizing term. It ensures that the resulting pdf integrates to one, and it is a suitable measure of the model's fit, often called the evidence. The equality of (4) and (5) follows from the Chapman-Kolmogorov equation (Karush, 1961). Furthermore, this equation also yields the predictive pdf $f(y_{k+1} \mid D(k))$ providing the Bayesian prediction, formally
$$f(y_{k+1} \mid D(k)) = \int f(y_{k+1} \mid D(k), \Theta)\, g(\Theta \mid D(k))\, \mathrm{d}\Theta = \frac{I_{k+1}}{I_k}. \qquad (6)$$
The last equality follows from the recursive property of the Bayesian updating (3).
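For concreteness, the recursion (3)–(6) can be sketched for the simplest conjugate pair: Normal observations with known noise variance and a Normal prior on the mean. The class name and the scalar setting are illustrative assumptions standing in for the general multivariate $\Theta$:

```python
import math

def normal_pdf(x, mean, var):
    """Density of the univariate Normal distribution N(mean, var) at x."""
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

class ConjugateNormalModel:
    """Observations y_k ~ N(theta, s2) with known noise variance s2,
    and a conjugate Normal prior theta ~ N(m, v) -- so Bayes' rule (3)
    keeps the parameter pdf Normal after every data update."""

    def __init__(self, m0, v0, s2):
        self.m, self.v, self.s2 = m0, v0, s2

    def evidence(self, y):
        # normalizer I_k of (4)-(5): the integral collapses to N(y; m, s2 + v)
        return normal_pdf(y, self.m, self.s2 + self.v)

    def update(self, y):
        # data update (3): fold measurement y into the parameter pdf
        I_k = self.evidence(y)
        gain = self.v / (self.v + self.s2)
        self.m = self.m + gain * (y - self.m)   # posterior mean
        self.v = gain * self.s2                 # posterior variance v*s2/(v+s2)
        return I_k

    def predictive(self, y_next):
        # predictive pdf f(y_{k+1} | D(k)) of (6); coincides with the
        # evidence evaluated after the update
        return normal_pdf(y_next, self.m, self.s2 + self.v)
```

Each call to `update` returns the evidence $I_k$, so the per-model weights needed by the averaging level come out of the recursion at no extra cost.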
Although the described methodology is important per se, it strongly relies on the invariance of $\Theta$. However, this assumption is often violated in practical situations, and the evolution of $\Theta$ must be appropriately reflected by an additional time update according to the model
$$g(\Theta_{k+1} \mid \Theta_k, D(k)). \qquad (7)$$
Generally, we can distinguish two significant cases:
(i) The evolution model (7) is known a priori. Then,
Θ is called the state variable and, under certain
conditions, the modelling turns into the famous
Kalman filter (Peterka, 1981).
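In this Gaussian case, one time-and-data update cycle can be sketched as a scalar Kalman filter; the evolution and observation coefficients `a`, `q`, `h`, `r` below are assumptions of the illustration, not quantities fixed by the text:

```python
def kalman_step(m, P, y, a, q, h, r):
    """One predict/update cycle of a scalar Kalman filter, assuming the
    known evolution model (7) is theta_{k+1} = a*theta_k + noise(var q)
    and the observation model is y_k = h*theta_k + noise(var r)."""
    # time update: propagate the parameter pdf through the evolution model
    m_pred = a * m
    P_pred = a * a * P + q
    # data update: Bayes' rule (3) in the Gaussian case
    S = h * h * P_pred + r           # predictive variance of y
    K = P_pred * h / S               # Kalman gain
    m_new = m_pred + K * (y - h * m_pred)
    P_new = (1.0 - K * h) * P_pred
    return m_new, P_new
```

With `a = 1` and `q = 0` the time update is the identity and the step reduces to the purely recursive Bayesian update of the time-invariant case.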
ADAPTIVE CONTINUOUS HIERARCHICAL MODEL-BASED DECISION MAKING - For Process Modelling with
Realistic Requirements