quence is shown with the dotted line). To judge how
the denoising quality depends on the amount of motion
in the sequence, the mean motion energy
E_t = (1/P) ∑_p ||c_p^t||² is also plotted. The
denoising quality decreases with increasing motion
energy. Figure 7 shows another denoising example with
a detail view for closer visual inspection of the
results. Although the noise variance varies between
14 and 16 and the motion energy increases, the
denoising quality remains quite stable.
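As a brief illustrative sketch (not the authors' code), the mean motion energy defined above can be computed from the per-pixel motion vectors c_p^t; the array shape is an assumption:

```python
import numpy as np

def mean_motion_energy(c):
    """Mean motion energy E_t = (1/P) * sum_p ||c_p^t||^2.

    c: array of shape (P, 2) holding the 2D motion vector c_p^t
       of each of the P pixels at time t (the shape is an assumption).
    """
    # Squared Euclidean norm of each motion vector, averaged over pixels.
    return np.mean(np.sum(c * c, axis=1))

# A static frame (all-zero motion) has zero motion energy;
# uniform unit motion in x gives energy 1.
static = mean_motion_energy(np.zeros((4, 2)))
moving = mean_motion_energy(np.tile([1.0, 0.0], (4, 1)))
```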
We note that if the number of intra-time iterations
n ≥ 1 is increased, the MRF result is likely to
surpass the accuracy of the DBN, but at the cost of
an n-fold increase in computing time. Furthermore,
more temporal neighbors could be used in the MRF, a
choice that is also likely to improve the quality of
the MRF result but again incurs additional computing
time.
7 SUMMARY AND CONCLUSIONS
We introduce a special 3D DBN topology with an effi-
cient class of transition probabilities as a basic
framework for low-level vision applications suited to
active vision systems. It provides promising results
in terms of memory consumption, computational cost,
and robustness. Image denoising experiments show that
for static scenes with static noise the proposed
approximate BP achieves denoising accuracy similar to
or better than standard BP on 2D MRFs. For dynamic
scenes, an efficient spatiotemporal node connection
for the DBN topology is introduced that allows fast
BP with a lower memory load than standard 3D MRF
approaches and more accurate denoising results on
noisy real-world image sequences.
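The approximate BP scheme for the DBN is not reproduced here; as a minimal sketch of the underlying min-sum BP machinery on the simplest graph, a 1D chain MRF (where message passing is exact), one might write the following. All parameter choices (squared costs, smoothness weight) are illustrative assumptions:

```python
import numpy as np

def chain_map_denoise(y, labels, lam=1.0):
    """Exact MAP denoising of a 1D signal via min-sum belief
    propagation (forward/backward message passing) on a chain MRF.

    y: noisy observations; labels: candidate intensity values;
    lam: smoothness weight. Costs are squared differences.
    """
    y = np.asarray(y, dtype=float)
    labels = np.asarray(labels, dtype=float)
    n, L = len(y), len(labels)
    # Unary (data) costs: squared difference to the observation.
    data = (labels[None, :] - y[:, None]) ** 2
    # Pairwise costs: squared label difference (smoothness prior).
    pair = lam * (labels[None, :] - labels[:, None]) ** 2
    # Forward messages: minimal cost arriving at node i from the left.
    m_f = np.zeros((n, L))
    for i in range(1, n):
        m_f[i] = np.min(m_f[i - 1][:, None] + data[i - 1][:, None] + pair,
                        axis=0)
    # Backward messages: minimal cost arriving at node i from the right.
    m_b = np.zeros((n, L))
    for i in range(n - 2, -1, -1):
        m_b[i] = np.min(m_b[i + 1][None, :] + data[i + 1][None, :] + pair,
                        axis=1)
    # Min-marginal beliefs; pick the minimizing label per node.
    belief = data + m_f + m_b
    return labels[np.argmin(belief, axis=1)]
```

On grids, as in the paper, loopy BP applies the same message updates iteratively, which is where memory load and iteration count become the dominant costs.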
VISAPP 2010 - International Conference on Computer Vision Theory and Applications