Hierarchical Modelling of Industrial System Reliability with Probabilistic Logic
Kamil Dedecius¹ and Pavel Ettler²
¹Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Pod Vodárenskou věží 4, 182 08 Prague, Czech Republic
²COMPUREG Plzeň, s.r.o., Nádražní 18, 306 34 Plzeň, Czech Republic
Keywords:
Bayesian Analysis, Subjective Logic, Reliability.
Abstract:
The use of Bayesian methods in dynamic assessment of system reliability is inevitably limited by computational difficulties arising from non-conjugate prior distributions. This contribution proposes an alternative framework, based on the combination of Bayesian methods and the subjective logic. The advantage of the former, a consistent and exhaustive representation of available statistical knowledge, is extended by the latter, which allows a computationally feasible combination of this knowledge at any level of the observed system using logic operations. The resulting methodology is currently under development in order to enlarge the capability of an intended novel industrial hierarchical condition monitoring system.
1 INTRODUCTION
We study the topic of hierarchical evaluation of system conditions (in terms of reliability and fault detection), based on the Bayesian paradigm. Its overwhelming influence in many areas of scientific data analysis naturally led to its more or less extensive use in reliability theory. For example, (Hamada et al., 2008) give a comprehensive survey of Bayesian methods for the assessment of component and system reliability, including techniques exploiting degradation data, assurance testing, regression models in reliability, Bayesian fault trees and network models.
The main difficulty with Bayesian modelling of
dynamic systems is associated with the computation
of posterior distributions (Gelman et al., 2003). If the
prior distributions expressing the knowledge of the
inferred variable of interest are not conjugate to the
data model, the posteriors may take intractable forms.
Computationally demanding approximations of these
posteriors become inevitable, but many methods (e.g.
Markov chain Monte Carlo) cannot reach a feasible
result in the available timespan. This problem is further accentuated when hierarchical models are involved.
The purpose of this contribution is twofold: first,
it proposes a method for computationally tractable
Bayesian inference of beta-distributed system relia-
bility. Second, the subjective logic (Jøsang, 2001;
Jøsang, 2008) provides, besides the gained tractability
resulting from logical operations, the means for intel-
ligible representation of reliability at any level of the
monitored system, from individual components up to
the system as a whole. Since the methods are some-
what different, we first present them concisely, giving
examples along the way.
The elaboration of the methodology is not autotelic: the combination of both approaches is being exploited within an international project aiming to develop a novel type of condition monitoring system and to test its achievements in the industrial environment. The paper presents work-in-progress results, whose assessment with respect to the existing methodologies is a part of future research.
2 BAYES AND SUBJECTIVE LOGIC
The proposed framework consists of two methodologies: the Bayesian modelling, an already well-established paradigm, and the subjective logic, a very recent probabilistic logic paradigm. While the Bayesian modelling allows a theoretically consistent and versatile approach to reliability modelling (we understand reliability to coincide with the probability that the studied system works well) at any level of the system of interest, the subjective logic
provides means for fast composition of conclusions among these levels. The result gives rise to a novel framework for system health monitoring, exploiting the intriguing aspects of both involved theories.
2.1 Principles of Bayesian Modelling
The principles of Bayesian modelling consist in the specification of a probabilistic model for observable data y and a prior distribution for this model's unobservable parameters θ. In other words, we assume that the data y obey some distribution with a probability density function f(y|θ), or, if observable explanatory variables x exist, f(y|x, θ).
The prior distribution π(θ) statistically summarizes all a priori available information about the inferred parameter θ. It can be obtained from past measurements, elicited from an expert, or it can take a non-informative form. The prior pdf is updated by the new information provided by x and y according to the Bayes' rule,

π(θ|x, y) = f(y|x, θ) π(θ) / ∫ f(y|x, θ) π(θ) dθ,    (1)
where the integral

q(y|x) = ∫ f(y|x, θ) π(θ) dθ = ∫ f(y, θ|x) dθ    (2)

is taken over the space of θ. It serves as a normalizing constant, assuring that the resulting posterior distribution π(θ|x, y) is proper. A careful inspection of (2) reveals that q(y|x) can play an even more fundamental role than the normalizing one: it is also the predictive density of y given x, obtained as the expected value E_θ[f(y|x, θ)], that is, taken over all admissible values of θ.
The resulting posterior pdf in (1), namely
π(θ|x, y), involves both the prior information and the
contribution from the observed data.
An important, though not always applicable, property related to Bayesian modelling is conjugacy. If the model f(y|x, θ) is chosen from a suitable class of distributions (the exponential family), a conjugate prior pdf π(θ) is guaranteed to exist.
This ensures that the posterior pdf π(θ|x, y) lies in
the same class of distributions as π(θ), the Bayesian
update is analytically tractable, and the posterior can
serve as the prior for a subsequent update when new
data is obtained. This salient feature is clearly prac-
tical for real time modelling of dynamical systems.
Then, denoting by t = 1, 2, . . . the time index,

π(θ|x_{1:t}, y_{1:t}) ∝ f(y_t|x_t, θ) π(θ|x_{1:t−1}, y_{1:t−1}),    (3)

where y_{1:t} = {y_1, . . . , y_t} (analogously for x_{1:t}) and ∝ stands for proportionality.
More on Bayesian modelling can be found, e.g.,
in (Gelman et al., 2003); its application to dynamic
modelling is thoroughly treated in (Peterka, 1981).
Example 1. Assume the regression model y = x′θ + ε, where y ∈ ℝⁿ is a regressand (dependent or response variable), x ∈ ℝ^{n×m} is a regressor (independent explanatory variable), θ ∈ ℝ^m denotes regression coefficients and ε is a vector of independent identically distributed noise terms from a zero-centered normal distribution with a known variance, N(0, σ²). Then, the probabilistic model f(y|x, θ) for y has the form

y ∼ N(θ′x, σ²).

If the prior pdf π(θ) is normal, then the posterior pdf π(θ|x, y) is also normal. That is, the form of the prior pdf is preserved, the prior pdf is conjugate to the model, and the dynamic setting (3) is possible. The predictive distribution with the pdf q(y|x) takes the form of a generalized Student's t distribution.
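To make the conjugacy of Example 1 concrete, the following minimal sketch (not part of the original paper) performs the normal-prior update of the regression coefficients with a known noise variance; all data, dimensions and variable names are illustrative.

```python
import numpy as np

# A minimal sketch of the conjugate update from Example 1 (illustrative data,
# dimensions and names; known noise variance sigma2).
rng = np.random.default_rng(0)
n, m, sigma2 = 50, 2, 0.5 ** 2
theta_true = np.array([1.5, -0.7])
x = rng.normal(size=(n, m))                       # regressor matrix
y = x @ theta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Normal prior theta ~ N(mean0, cov0); the posterior is again normal.
mean0, cov0 = np.zeros(m), 10.0 * np.eye(m)
post_prec = np.linalg.inv(cov0) + x.T @ x / sigma2
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (np.linalg.inv(cov0) @ mean0 + x.T @ y / sigma2)

print("posterior mean:", post_mean)               # close to theta_true
print("posterior covariance:", post_cov)
```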
2.1.1 Aspects of the Bayesian Approach
The Bayesian approach has many advantages, arising from its theoretical consistency and, to a significant degree, balancing its frequent computational burden. Some of the advantages, important for reliability modelling, are:

Uncertainty: an inherent aspect of information, it is consistently involved in modelling. For instance, the posterior distribution of θ naturally expresses our uncertainty in terms of variance.

Asymptotics: while the frequentist paradigm heavily relies on asymptotic results, the Bayesian approach does not. It yields results with any sample size. This is possible due to the concept of uncertainty.

Dynamic Modelling: an important aspect connected with conjugate priors. For instance, the Kalman filter or autoregression have their Bayesian interpretations, but there are many Bayesian models without equivalent non-Bayesian counterparts. Dynamic modelling inevitably calls for parameter tracking; the Bayesian paradigm allows its consistent treatment, e.g. (Dedecius et al., 2012).

Model Selection and Combination: it is easily possible to discriminate among several candidate models or even combine their results in order to further improve (e.g. stabilize) the whole modelling, see, e.g., (Raftery et al., 2010).
ICINCO2014-11thInternationalConferenceonInformaticsinControl,AutomationandRobotics
134
2.2 Principles of Subjective Logic
Subjective logic (Jøsang, 2001; Jøsang, 2008) is a novel probabilistic logic theory for the treatment of uncertain propositions. Like other logic theories, it provides operations with these propositions, for instance logical negation (NOT), conjunction (AND), disjunction (OR), implications, modus ponens and modus tollens and many others, some of which do not have their counterparts in other logics, e.g. discounting.
Subjective opinions, binomial or Dirichlet, express beliefs about propositions under uncertainty. Since we are working with a dichotomous variable (failure absent/present), we focus on binomial opinions. These opinions are represented by a quadruple

ω = (b, d, u, a),

where b denotes the mass of belief in support of the variable of interest being true, d the disbelief with the opposite meaning, u the uncertainty, i.e. the mass complementing belief and disbelief, and finally a stands for the base rate, which is, to some degree, similar to the prior information in Bayesian inference. The terms satisfy the following conditions:

b, d, u, a ∈ [0, 1] and b + d + u = 1.
Opinions in subjective logic have a direct connection with binary logic and probability. For instance,

b = 1 – equivalent to binary logic TRUE;
d = 1 – equivalent to binary logic FALSE;
u = 0 – equivalent to traditional probability.

The mean value of a binary proposition about X is simply the mass of belief plus a proportion of uncertainty assigned by the base rate,

E[X] = b + au.    (4)

Obviously, binomial opinions live in a convex hull, concretely a 2-simplex, an equilateral triangle with edges of unit length.
Although the representation of binomial opinions in terms of ω = (b, d, u, a) is intuitive, we will prefer a more convenient form of a beta distribution B(α, β) with parameters α, β > 0. The beta distribution has a pdf of the form

f(p|α, β) ∝ p^(α−1) (1 − p)^(β−1),    p ∈ [0, 1],    (5)

where p denotes the probability and ∝ is proportionality. The beta distribution of a binomial opinion maps the parameters as α = r + Wa and β = s + W(1 − a), where W is a prior weight, usually set equal to 2, guaranteeing the uniform pdf with a = 0.5 and r, s = 0. The equivalent of (4) is the mean value of the beta-distributed variable,

E[p] = α / (α + β) = (r + Wa) / (r + s + W).
The bijective mapping between the beta representation and the opinion has the following form:

b = r/Ξ,  d = s/Ξ,  u = W/Ξ    ⇔    r = Wb/u,  s = Wd/u,

where Ξ = W + r + s. We omit the trivial case u = 0 and consider the standard W = 2, preserving the uniform distribution with b, d = 0.
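The mapping is easy to mechanize. The following sketch converts between the (b, d, u, a) representation, the counts (r, s) and the beta parameters with the standard W = 2; the function names are ours, not part of the subjective logic literature.

```python
# A sketch of the opinion <-> beta mapping above (W = 2); illustrative names.
W = 2.0

def counts_to_opinion(r, s, a):
    """(r, s, a) -> (b, d, u, a) with Xi = W + r + s."""
    xi = W + r + s
    return r / xi, s / xi, W / xi, a

def opinion_to_beta(b, d, u, a):
    """(b, d, u, a) -> (alpha, beta), using r = W*b/u and s = W*d/u (u > 0)."""
    r, s = W * b / u, W * d / u
    return r + W * a, s + W * (1.0 - a)

# e.g. r = 25 successes and s = 5 failures with base rate a = 0.5:
b, d, u, a = counts_to_opinion(25, 5, 0.5)
alpha, beta = opinion_to_beta(b, d, u, a)
print((b, d, u), (alpha, beta), alpha / (alpha + beta))   # E[p] = 26/32
```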
Subjective Logic Conjunction and Disjunction (AND/OR). Given two independent opinions ω_x = (b_x, d_x, u_x, a_x) and ω_y = (b_y, d_y, u_y, a_y), the conjunction (AND) binomial opinion ω_{x∧y} is given by

b_{x∧y} = b_x b_y + [(1 − a_x) a_y b_x u_y + (1 − a_y) a_x b_y u_x] / (1 − a_x a_y),
d_{x∧y} = d_x + d_y − d_x d_y,
u_{x∧y} = u_x u_y + [(1 − a_y) b_x u_y + (1 − a_x) b_y u_x] / (1 − a_x a_y),
a_{x∧y} = a_x a_y,
while the logical disjunction (OR) is given by

b_{x∨y} = b_x + b_y − b_x b_y,
d_{x∨y} = d_x d_y + [(1 − a_y) a_x d_x u_y + (1 − a_x) a_y d_y u_x] / (a_x + a_y − a_x a_y),
u_{x∨y} = u_x u_y + [a_y u_y d_x + a_x u_x d_y] / (a_x + a_y − a_x a_y),
a_{x∨y} = a_x + a_y − a_x a_y.
The proofs can be found in (Jøsang and McAnally, 2005). For other logical operations refer, e.g., to (Jøsang, 2008).
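As an illustration only (not the authors' code), the conjunction and disjunction above translate directly into the following sketch, with opinions represented as plain (b, d, u, a) tuples.

```python
# A sketch of the binomial conjunction (AND) and disjunction (OR) given above;
# opinions are tuples (b, d, u, a), function names are ours.
def sl_and(x, y):
    bx, dx, ux, ax = x
    by, dy, uy, ay = y
    k = 1.0 - ax * ay
    b = bx * by + ((1 - ax) * ay * bx * uy + (1 - ay) * ax * by * ux) / k
    d = dx + dy - dx * dy
    u = ux * uy + ((1 - ay) * bx * uy + (1 - ax) * by * ux) / k
    return b, d, u, ax * ay

def sl_or(x, y):
    bx, dx, ux, ax = x
    by, dy, uy, ay = y
    k = ax + ay - ax * ay
    b = bx + by - bx * by
    d = dx * dy + ((1 - ay) * ax * dx * uy + (1 - ax) * ay * dy * ux) / k
    u = ux * uy + (ay * uy * dx + ax * ux * dy) / k
    return b, d, u, k
```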
Example 2. Suppose that we monitor a real system using the active probing technique, which consists in periodically sending short request messages and counting the number of received responses. A typical example is the Internet Control Message Protocol (ICMP) Echo Request, commonly known as ping, which should be answered by an ICMP Echo Reply. Simply counting the proportion of received replies to all requests is an example of an application of the beta distribution. An illustration is depicted in Fig. 1.
Figure 1: ICMP active probing example: r + s = 30 requests, r = 25 received replies, base rate a = 0.5 and standard W = 2. Left: beta pdf with depicted mean value. Right: proportions of belief (green), disbelief (red) and uncertainty (grey).

Figure 2: Example of a three-block network with redundancy.

Example 3. Let us have a system consisting of three blocks – A, B and C. Let A and B be interchangeable in the sense that it is satisfactory if at least one of them works well, and let C be a critical system (Fig. 2). An example of such a setting is a redundant disk array where A and B are two mirrored hard drives
and C is another hard drive. We are interested in the condition monitoring of the whole setting, that is, in

ω = (ω_A ∨ ω_B) ∧ ω_C.    (6)

For example, if

ω_A = (0.95, 0.02, 0.03, 0.5),
ω_B = (0.3, 0.6, 0.1, 0.5),
ω_C = (0.9, 0.05, 0.05, 0.5),

we obtain ω = (0.89, 0.07, 0.05, 0.38). This is also graphically depicted in Fig. 3 for the block A–B and in Fig. 4 for the whole system A–B–C.
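Assuming the sl_or/sl_and sketch from Section 2.2, this example can be reproduced numerically (rounded to two decimals).

```python
# Reproducing Example 3 with the sl_or/sl_and sketch from Section 2.2.
omega_A = (0.95, 0.02, 0.03, 0.5)
omega_B = (0.30, 0.60, 0.10, 0.5)
omega_C = (0.90, 0.05, 0.05, 0.5)

omega_AB = sl_or(omega_A, omega_B)        # approx. (0.97, 0.02, 0.02, 0.75)
omega = sl_and(omega_AB, omega_C)         # approx. (0.89, 0.07, 0.05, 0.38)
print([round(v, 2) for v in omega_AB], [round(v, 2) for v in omega])
```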
Figure 3: Subsystem A–B. The resulting ω_A ∨ ω_B = (0.97, 0.02, 0.02, 0.75), with (b, d, u) depicted in green, red and grey, respectively.
Figure 4: System A–B–C. The resulting (ω_A ∨ ω_B) ∧ ω_C = (0.89, 0.07, 0.04, 0.38), with (b, d, u) depicted in green, red and grey, respectively.

3 FUSION FRAMEWORK

Fusion of the theories of Bayesian modelling and subjective logic is relatively straightforward, due to the beta representation of binomial opinions. Following
the aforementioned principles of the Bayesian update,
equations (1) or (3), we only need to know the in-
formation generating model. A natural choice is the
generalized binomial distribution, since the beta dis-
tribution is conjugate to it and the resulting posterior
will be beta as well. Recall that this is advantageous in dynamic settings.
The generalized binomial distribution has a pdf of the form

f(y|p) ∝ p^r (1 − p)^s,    (7)

where p ∈ [0, 1] is a parameter (probability) and r, s > 0 are statistics, e.g. r successes and s failures in r + s trials. If r and s are positive integers, then one talks about the usual binomial distribution Binom(r + s, r). In the generalized form, however, the normalizing term does not take the form of the commonly known binomial coefficient. Clearly, (7) is conjugate to (5), the quantities r, s coincide and Wa in (5) is absorbed into the normalizing term.
The Bayesian update with the newly obtained r_t and s_t takes the form

π(p|r_{1:t}, s_{1:t}) ∝ f(r_t, s_t|p) π(p|r_{1:t−1}, s_{1:t−1}).    (8)

That is, the posterior pdf is fully characterized by the statistics

r_{1:t} = r_{1:t−1} + r_t,    (9)
s_{1:t} = s_{1:t−1} + s_t.    (10)
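In code, the update (8)–(10) amounts to incrementing the accumulated counts; a minimal sketch with illustrative names follows.

```python
# A minimal sketch of the update (8)-(10): the accumulated statistics (r, s)
# are simply incremented by the newly observed counts (r_t, s_t).
def update(r, s, r_t, s_t):
    return r + r_t, s + s_t

# e.g. a vacuous start (r = s = 0) followed by 25 successes and 5 failures
# gives E[p] = (25 + 1) / (30 + 2) with W = 2 and a = 0.5:
r, s = update(0.0, 0.0, 25, 5)
print((r + 1.0) / (r + s + 2.0))    # 0.8125
```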
3.1 Accounting for Variability
The ordinary Bayes’ rule (1) assumes invariability
of the inferred parameters. A direct exploitation of
the prior and model for varying parameters inevitably
spoils the inference. A simple way around this issue is
the use of forgetting. A thorough overview of the state of the art in forgetting theory can be found in (Dedecius et al., 2012). Here, we propose to use the simplest approach, the exponential forgetting, introduced in the Bayesian setting in (Peterka, 1981). This
ICINCO2014-11thInternationalConferenceonInformaticsinControl,AutomationandRobotics
136
means flattening of the prior pdf before new data is incorporated into it,

π(p|r_{1:t−1}, s_{1:t−1}) ∝ [π(p|r_{1:t−1}, s_{1:t−1})]^λ,    (11)

where λ ∈ (0, 1) is the forgetting factor, usually not lower than 0.95. Obviously,

r_{1:t−1} → λ r_{1:t−1},
s_{1:t−1} → λ s_{1:t−1}.

Some selected pairs of values of λ and the number d of effective samples are depicted in Table 1.

Table 1: Number d of effective samples given forgetting factor λ.

λ: 0.999  0.998  0.995  0.99  0.98  0.95
d: 1000   500    200    100   50    20
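The following sketch shows how the forgetting acts on the beta statistics, together with the relation d ≈ 1/(1 − λ), which we assume underlies Table 1 (it matches all listed pairs and follows from the geometric series of weights).

```python
# A sketch of the exponential forgetting (11) on the beta statistics, and of
# the effective sample size d = 1 / (1 - lambda) assumed behind Table 1.
def forget(r, s, lam):
    return lam * r, lam * s

for lam in (0.999, 0.998, 0.995, 0.99, 0.98, 0.95):
    print(lam, round(1.0 / (1.0 - lam)))    # 1000, 500, 200, 100, 50, 20
```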
3.2 Sources of Opinion Information
The model (7) can be viewed in the scope of classification theory: the information, generated by the underlying process, belongs either to the class "success" (good operating conditions) or to the class "failure". Furthermore, this classification can be hard (the information pertains to only one class at a time), or it can be soft (the information may belong to both classes to some extent). The former case is typical for strictly dichotomous information (the system is either working or not). The latter applies when there is some uncertainty present (it works, but not perfectly).

There are several possible sources of information (data) for the update of opinions. Below, we list our initial results. However, this topic still deserves a considerable amount of research effort.
3.3 Hard Classification
Strictly binomial data naturally arise in many situations where only success or failure is observed, e.g. a monitored subsystem working/not working. A typical example is the basic active probing technique illustrated above. Then, the Bayesian update (3) simply incorporates direct counts of successes r_t ∈ ℤ⁺ and failures s_t ∈ ℤ⁺ into the prior distribution via equations (9) and (10).
Often, it is possible to hard-classify with respect
to some preset threshold. In Section 4.2, we give
an illustration of this. The measurements of a noise-
corrupted signal are classified based on the signal-to-
noise ratio (SNR) resulting either from good or bad
(system) conditions. This dichotomous classification
exploits a criterion set by an expert.
3.4 Soft Classification
Here the situation is much more complicated, as we deal with the need to extract information in favor of both classes. In the scope of the Bayesian analysis, we propose to exploit the predictive pdf q(y|x), equation (2). It is a natural source of information, measuring the fit of the data y with respect to the a posteriori available information about the model and its parameters. For assigning the data to the classes, we can compare their localisation with respect to a suitable statistic, e.g. the mean value, the median or others. A popular approach, independent of our framework, is to divide the support of q(y|x) into intervals by multiples of the standard deviation σ, starting from the mean µ. For instance, the 1–3σ intervals are given by µ ± σ, µ ± 2σ, µ ± 3σ, minus the mutual intersections. These intervals are assigned probabilities, inherited as r_t and s_t = 1 − r_t for the Bayesian update (3). The probabilities can be obtained as the proportion of data within each interval. Roughly, it is given by the Chebyshev's inequality

Pr(|Y − µ| ≥ cσ) ≤ c^{−2},    c ∈ ℝ⁺.

Precisely, knowledge of the functional form of the pdf allows determination of the exact proportions from the distribution function Q(x),

Pr(|Y − µ| ≤ cσ) = ∫_{µ−cσ}^{µ+cσ} q(y|x) dy = Q(µ + cσ) − Q(µ − cσ).
Example 4. Let us consider the specification of (r_t, s_t) for the Bayesian update of the beta distribution according to (8). If the assignment is based on the assumption of normality of y|x ∼ N(µ, σ²), then r_t = 1 − s_t, equivalent to the relative amount of data within the intervals, is

r_t = 0.68 for y ∈ (µ − σ, µ + σ);
r_t = 0.27 for y ∈ (µ − 2σ, µ + 2σ) \ (µ − σ, µ + σ);
r_t = 0.04 for y ∈ (µ − 3σ, µ + 3σ) \ (µ − 2σ, µ + 2σ);
r_t = 0.01 elsewhere.

The above-given values equivalently come from the 3σ rule.
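A minimal sketch of this soft classification, assuming a normal predictive pdf and using SciPy's normal cdf (the function name soft_r is ours), follows.

```python
from scipy.stats import norm

# A sketch of the soft-classification weights of Example 4, assuming a normal
# predictive pdf with mean mu and standard deviation sigma (names are ours).
def soft_r(y, mu, sigma):
    c = abs(y - mu) / sigma
    if c <= 1.0:
        lo, hi = 0.0, 1.0
    elif c <= 2.0:
        lo, hi = 1.0, 2.0
    elif c <= 3.0:
        lo, hi = 2.0, 3.0
    else:
        return 2.0 * norm.sf(3.0)               # tail mass beyond 3 sigma
    return 2.0 * (norm.cdf(hi) - norm.cdf(lo))  # mass of the symmetric shell

# approx. 0.68, 0.27 and 0.04 as above; the exact tail mass is about 0.003,
# listed as 0.01 in Example 4, presumably so that the weights sum to one.
print([round(soft_r(y, 0.0, 1.0), 2) for y in (0.5, 1.5, 2.5, 3.5)])
```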
4 EXAMPLES
Two examples are given below. The first one depicts the evolution of the binomial opinion updated by several data batches. The second considers modelling with real data.
HierarchicalModellingofIndustrialSystemReliabilitywithProbabilisticLogic
137
4.1 Bayesian Update
This example considers the Bayesian update of the bi-
nomial opinion represented by the beta pdf. It is ini-
tialized with (b, d, u, a) = (0.4, 0.3, 0.3, 0.5). Each of
the six subsequent updates is based on 10 measure-
ments with 5, 2, 5, 7, 9 and 9 successes. The scheme
corresponds to hard classification (Section 3.3) with
binomial data, typical, e.g., for the active probing
monitoring.
The evolution of the beta pdf and the opinions is
depicted in Figure 5 (green is belief, red disbelief and
grey uncertainty). Consistently with our expectation,
the gradually increasing kurtosis of the beta pdf is
connected with the diminishing opinion uncertainty.
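For reference, the scheme of this example can be reproduced with a few lines (a sketch only, using the opinion-to-counts mapping of Section 2.2 with W = 2).

```python
# A sketch reproducing the scheme of this example: the initial opinion is
# mapped to beta statistics (W = 2, a = 0.5) and updated by six batches of
# 10 hard-classified measurements.
W, a = 2.0, 0.5
b, d, u = 0.4, 0.3, 0.3
r, s = W * b / u, W * d / u                       # about 2.67 and 2.0

for successes in (5, 2, 5, 7, 9, 9):
    r, s = r + successes, s + (10 - successes)    # counts, eq. (9)-(10)
    xi = W + r + s
    print("b=%.2f d=%.2f u=%.2f E[p]=%.2f" % (r / xi, s / xi, W / xi, (r + W * a) / xi))
```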
Figure 5: Bayesian update: Evolution of the beta pdf and
opinion during 6 updates.
4.2 Example from Metal Processing Industry

This example relates to the operating conditions of a cold rolling mill. The output thickness of a processed metal strip is one of the key measures of the product quality. Its deviation h_2 from the nominal value is essential for the thickness control. If h_2 is not measured correctly, be it due to a sensor failure or problems with the rolled strip, the control may potentially result in deterioration of the final product quality.
Figure 6 shows an example of a spurious thickness measurement, affected by dirt on the strip surface. The dirt implies the superposition of an additive noise due to the jitter of the sensor's measuring tips (top, in blue). Assuming normality of the additive noise, the signal (h_2) can be filtered using a normal first-order autoregressive model AR(1) of the form

h_{2,t} ∼ N([h_{2,t−1}, 1] [β_{0,t}, β_{1,t}]⊤, σ²_t),    t = 1, 2, . . . ,
where the scalars β_{0,t} and β_{1,t} are the regression coefficients (the slope and intercept) and σ²_t denotes the variance of the zero-mean noise. The parameters β_{0,t}, β_{1,t} and σ²_t are estimated in the Bayesian framework with the normal inverse-gamma prior pdf in the Peterka's form NiG(V_{t−1}, ν_{t−1}), with a symmetric positive definite information matrix V_{t−1} ∈ ℝ^{3×3} and the scalar degrees of freedom ν_{t−1} ∈ ℝ⁺, see (Peterka, 1981).
Their update (1) with exponential forgetting reads

V_t = λV_{t−1} + [h_{2,t}, h_{2,t−1}, 1]⊤ [h_{2,t}, h_{2,t−1}, 1],
ν_t = λν_{t−1} + 1.
The estimators of β_0, β_1 and σ² are given by

[β̂_{0,t}, β̂_{1,t}]⊤ = V_{t,[2:3,2:3]}^{−1} V_{t,[2:3,1]}    (12)

and

σ̂²_t = (1/ν_t) ( V_{t,[1,1]} − [β̂_{0,t}, β̂_{1,t}] V_{t,[2:3,1]} ),    (13)

where indices of the type [i : j, k] denote blocks on rows i : j and column k. A thorough inspection of (12) and (13) reveals ordinary least squares estimators with a Tikhonov-type regularization, that is, the least-squares estimators in the Bayesian sense.
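A minimal numerical sketch of the recursive update and the estimators (12)–(13), with exponential forgetting and purely illustrative simulated data (not the plant data used below), is the following.

```python
import numpy as np

# A minimal sketch of the recursive estimation of Section 4.2: forgetting on
# the extended information matrix V and degrees of freedom nu, followed by the
# point estimates (12)-(13). The AR(1) data below are purely illustrative.
lam = 0.97
V = np.diag([0.1, 0.01, 0.01])                    # prior V_0
nu = 3.0                                          # prior nu_0

def ar1_step(V, nu, h2_t, h2_prev):
    psi = np.array([h2_t, h2_prev, 1.0])          # extended data vector
    V = lam * V + np.outer(psi, psi)              # forgetting + data update
    nu = lam * nu + 1.0
    beta_hat = np.linalg.solve(V[1:, 1:], V[1:, 0])    # eq. (12)
    sigma2_hat = (V[0, 0] - beta_hat @ V[1:, 0]) / nu  # eq. (13)
    return V, nu, beta_hat, sigma2_hat

rng = np.random.default_rng(1)
h2 = [0.0]
for _ in range(500):                              # simulated strip deviation
    h2.append(0.9 * h2[-1] + 0.05 + rng.normal(scale=0.1))
for t in range(1, len(h2)):
    V, nu, beta_hat, sigma2_hat = ar1_step(V, nu, h2[t], h2[t - 1])
print("slope/intercept:", beta_hat, "noise variance:", sigma2_hat)
```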
We exploit the hard classification approach based on the signal-to-noise ratio (SNR). The threshold value between the normal and abnormal states is 10 dB (determined by an expert). The value of the SNR is directly used to update the beta distribution.
The prior pdf was initialized with V_0 having the diagonal [0.1, 0.01, 0.01] and zeros elsewhere, and with ν_0 = 3, corresponding to a flat distribution. The variability of the parameters and opinions is driven by the forgetting factors, set to 0.97 for the estimation of the autoregressive model and to 0.95 for the beta updates, (11). Since these factors influence the reaction time, a method for their adaptive tuning would further improve the properties of the modelling and condition evaluation.
The results from the modelling and determination of the h_2 conditions are depicted in Figure 6, together with the original data. The first subfigure shows the evolution of h_2 (blue) and the filtered signal (red). Obviously, there exist two segments with significant superposed noise. The evolution of the estimates follows. The SNR values are in accordance with the expected behavior with the exception of the first circa 400 steps. We stress that most of this period is a stabilization, i.e., a transition from a very flat (i.e., high variance, low information) prior pdf to an informative one. The last subfigure depicts the evolution of the triple (b, d, u) of the binomial opinion. Again, the beginning is very skeptical due to the flat prior. We see that (up to the
ICINCO2014-11thInternationalConferenceonInformaticsinControl,AutomationandRobotics
138
stabilization) the framework quite promptly reacted to the worsening of the signal quality. The loss of information from the noisy signal becomes particularly evident after t ≈ 1500, where even the autoregressive model loses its filtering ability. Correspondingly, d ≈ 1.

Sometimes, this approach may be too stringent. Then, the update may be based on other (e.g. transformed) information. Also, the responsiveness may be tuned by the forgetting factor.

We remind that the subjective logic opinion is intended to enter the logical operations for further analysis of the whole system.
Figure 6: Evolution of the condition of the output strip thickness deviation h_2. From top: strip thickness deviation h_2 (blue) and its filtered value ĥ_2 (red); evolution of the regression coefficient estimates β̂_0 and β̂_1; evolution of the noise variance estimate σ̂²; evolution of the SNR with respect to the 10 dB criterion; evolution of the opinion (in the same colors as above).
5 CONCLUSION
The proposed novel framework, combining the Bayesian paradigm for information processing and the subjective logic for its combination and representation, provides intriguing methods for hierarchical modelling of system reliability. While not dictating the particular form of the combination, it allows one to exploit the best of both theories where and when necessary. The approach is in its initial development and a lot remains to be done. In particular, the Bayesian update of the opinion beta pdf is straightforward up to the need of obtaining binomial information from not necessarily binomial data. This point still deserves considerable attention.
ACKNOWLEDGEMENTS
The research project is supported by the grant MŠMT 7D12004 (E!7262 ProDisMon).
REFERENCES
Dedecius, K., Nagy, I., and Kárný, M. (2012). Parameter tracking with partial forgetting method. International Journal of Adaptive Control and Signal Processing, 26(1):1–12.

Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2003). Bayesian Data Analysis, Second Edition. Chapman & Hall/CRC.

Hamada, M. S., Wilson, A. G., Reese, C. S., and Martz, H. F. (2008). Bayesian Reliability. Springer.

Jøsang, A. (2001). A Logic for Uncertain Probabilities. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 9(3):279–311.

Jøsang, A. (2008). Conditional Reasoning with Subjective Logic. Journal of Multiple-Valued Logic and Soft Computing, 15(1):5–38.

Jøsang, A. and McAnally, D. (2005). Multiplication and comultiplication of beliefs. International Journal of Approximate Reasoning, 38(1):19–51.

Peterka, V. (1981). Bayesian approach to system identification. In P. Eykhoff (Ed.), Trends and Progress in System Identification, pages 239–304. Pergamon Press, Oxford, U.K.

Raftery, A., Kárný, M., and Ettler, P. (2010). Online Prediction Under Model Uncertainty via Dynamic Model Averaging: Application to a Cold Rolling Mill. Technometrics, 52(1):52–66.
HierarchicalModellingofIndustrialSystemReliabilitywithProbabilisticLogic
139