beliefs $\mu_1(1)$ and $\mu_1(2)$; that is, the belief that the target is in box 2 is greater than the probability that it is there.
Further iterations of the processes demonstrate that in the MP the relation $p_t(1) < p_t(2)$ remains until the steady state $p_t(1) = p_t(2) = 0.5$ is reached. In the μP, in contrast, each iteration reverses the relation between $\mu_t(1)$ and $\mu_t(2)$, so that $\mu_0(2) < \mu_0(1)$, $\mu_1(2) > \mu_1(1)$, $\mu_2(2) < \mu_2(1)$, $\mu_3(2) > \mu_3(1)$, and so on, until the steady state $\mu_t(1) = \mu_t(2) = 0.5$ is reached.
Thus, in the MP the searcher should check box 2 at all times, while in the μP the searcher should change the checked box at each step.
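A brief remark on why the MP steady state in this first case is the uniform one (the transition matrix of this case is given earlier in the paper and is not repeated in this excerpt, so the following is only a general check): for a two-state MP, the uniform distribution is stationary exactly when the row-stochastic transition matrix is also column-stochastic (doubly stochastic):

```latex
% Uniform distribution (0.5, 0.5) is stationary for a row-stochastic
% 2x2 matrix \rho iff its columns also sum to one (doubly stochastic):
(0.5,\ 0.5)\,\rho
  = \bigl(0.5(\rho_{11}+\rho_{21}),\ 0.5(\rho_{12}+\rho_{22})\bigr)
  = (0.5,\ 0.5)
  \iff \rho_{11}+\rho_{21} = \rho_{12}+\rho_{22} = 1 .
```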
Now assume that the transition matrices (transition beliefs and transition probabilities) are
$$\omega = \rho = \begin{pmatrix} 0.4 & 0.6 \\ 0.3 & 0.7 \end{pmatrix}. \qquad (41)$$
Then the picture essentially changes:
$$p_1 = (0.38,\ 0.62), \qquad \mu_1 = (0.66,\ 0.34). \qquad (42)$$
Here the location probability $p_1(1) = 0.38$ for the first box is again smaller than the location probability $p_1(2) = 0.62$ for the second box, but the belief $\mu_1(1) = 0.66$ that the target will be in the first box is greater than the belief $\mu_1(2) = 0.34$ that it will be in the second box. Consequently, in the first case the searcher should check box 2, whereas in the second case the searcher should check box 1.
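As a cross-check of the MP half of (42): the location probabilities follow the standard update $p_{t+1}(j) = \sum_i p_t(i)\,\rho_{ij}$. Below is a minimal sketch, assuming an initial distribution $p_0 = (0.8,\ 0.2)$ (this value is not stated in the excerpt above; it is chosen because it reproduces (42)). The μP half is not sketched, since its update relies on the uninorm and absorbing-norm aggregators of the algebra 𝒜 defined earlier in the paper.

```python
# Sketch of one step of the ordinary Markov process (MP) with the
# transition matrix of Eq. (41).  The initial distribution p0 is an
# assumption (not given in this excerpt), chosen to reproduce Eq. (42).

rho = [[0.4, 0.6],
       [0.3, 0.7]]   # transition probabilities, Eq. (41)
p0 = [0.8, 0.2]      # assumed initial location probabilities

def mp_step(p, rho):
    """One MP update: p_{t+1}(j) = sum_i p_t(i) * rho[i][j]."""
    n = len(p)
    return [sum(p[i] * rho[i][j] for i in range(n)) for j in range(n)]

p1 = mp_step(p0, rho)
print([round(x, 2) for x in p1])   # -> [0.38, 0.62], the MP part of Eq. (42)
```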
In the further iterations both relations $p_t(1) < p_t(2)$ and $\mu_t(1) > \mu_t(2)$ remain until reaching the steady states $p_t(1) = 1/3 \approx 0.33$, $p_t(2) = 2/3 \approx 0.67$ and $\mu_t(1) = \mu_t(2) = 0.5$. In the MP this state prescribes continuing to check box 2, while in the μP it prescribes choosing the box at random.
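For reference, the MP steady state under (41) is the stationary distribution of $\rho$, which can be obtained directly:

```latex
% Stationary distribution of the transition matrix in Eq. (41):
\pi = \pi\rho,\quad \pi_1 + \pi_2 = 1
  \;\Rightarrow\; \pi_1 = 0.4\,\pi_1 + 0.3\,\pi_2
  \;\Rightarrow\; 0.6\,\pi_1 = 0.3\,\pi_2
  \;\Rightarrow\; \pi = \left(\tfrac{1}{3},\ \tfrac{2}{3}\right) \approx (0.33,\ 0.67).
```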
In addition, notice that for both transition matrices the MP reaches its steady state faster than the μP does; thus the μP provides more information for deciding which box to check.
The considered model is clearly the simplest one and is used only as an example. However, even such a simple model emphasizes the difference between the MP and the μP and demonstrates that the decisions made in the μP can differ from the decisions made in the MP.
More complex processes and decisions are obtained by using the non-distributive version of the algebra 𝒜, in which the elements 𝜃 and 𝜗 of the aggregators ⊕ and ⊗ differ, or even the generating functions 𝑢 and 𝑣 differ; these issues are left for further research.
6 CONCLUSIONS
The suggested subjective Markov process (μP) runs in parallel to the usual Markov process (MP) but, in contrast to the MP, it acts in the recently constructed algebra 𝒜, which implements uninorm and absorbing-norm aggregators and combines logical and arithmetical operations.
The values with which the μP deals are considered as the observer's beliefs about the system's states and can be associated with grades of membership or with possibilities of the system being in certain states. Such a definition allows the suggested μP to be used instead of, or in parallel with, the MP for analyzing systems that involve rare events or follow subjective irrational decisions.
For the suggested process, we considered the basic types of states with respect to the transition beliefs that specify the possibilities of transitions among the states. An essential role in this consideration is played by the recently introduced concepts of subjective false and subjective true, which allow a precise and meaningful classification of the states.
The difference between the suggested μP and the usual MP is illustrated by the running example of the Pollock model of search. It was shown that even in such a simple model (with maximization of the probability of finding the target) the μP provides additional information and leads to decisions that can differ from those prescribed by the MP.
REFERENCES
Batyrshin I., Kaynak O., Rudas I., 2002. Fuzzy modeling based on generalized conjunction operations. IEEE Trans. Fuzzy Syst., 10(5), 678-683.
Bellman R., Giertz M., 1973. On the analytic formalism of the theory of fuzzy sets. Inform. Sci., 5, 149-156.
Dubois D., Prade H., 1985. A review of fuzzy set aggregation connectives. Inform. Sci., 36, 85-121.
Dubois D., Prade H., 1988. Possibility Theory. Plenum, New York.
Feller W., 1970. An Introduction to Probability Theory and Its Applications. 3rd edition. Vol. I. John Wiley & Sons, New York.
Fine S., Singer Y., Tishby N., 1998. The hierarchical hidden Markov model: analysis and applications. Machine Learning, 32, 41-62.
Fodor J., Rudas I., Bede B., 2004. Uninorms and absorbing norms with applications to image processing. In Proc. SISY.
Fodor J., Yager R., Rybalov A., 1997. Structure of uninorms. Int. J. Uncert., Fuzziness and Knowl.-Based Syst., 5, 411-427.