sults are obtained with real clinical images.
ACKNOWLEDGEMENTS
This work has been partially supported by FIS projects PI04/0857 and PI05/1953, and by the Greece-Spain Integrated Action HG2004-0014.
APPENDIX
In Algorithm 3 we need to calculate the quantities $\mathrm{E}[x_j]_{q^k(x)}$, $\mathrm{E}[\log(\pi_c\, p_G(x_j \mid \beta_c, \alpha_c))]_{q^{k+1}(x),\, q^{k+1}(\beta),\, q^k(\pi)}$, $\mathrm{E}[1/\beta_c]_{q^{k+1}(\beta)}$ and $\mathrm{E}[\log A_{i,j} x_j]_{q^{k+1}(x)}$.
To calculate $\mathrm{E}[x_j]_{q^k(x)}$ we note that (see Eq. 8)
$$\mathrm{E}[x_j]_{p_G(x_j \mid u_j, v_j)} = v_j. \qquad (61)$$
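Since Eq. 8 is not reproduced in this appendix, the following minimal sketch is only a numerical sanity check of Eq. (61), under the assumption that $p_G(x \mid u, v)$ denotes a gamma density with shape $u$ and mean $v$ (i.e. rate $u/v$); the values chosen for $u_j$ and $v_j$ are arbitrary.

```python
import numpy as np

# Monte Carlo check of Eq. (61): E[x_j] under p_G(x_j | u_j, v_j) equals v_j.
# Assumption (Eq. 8 is not reproduced here): p_G(x | u, v) is a gamma density
# with shape u and mean v, i.e. rate u / v.  u_j and v_j below are arbitrary.
rng = np.random.default_rng(0)
u_j, v_j = 3.0, 2.5
samples = rng.gamma(shape=u_j, scale=v_j / u_j, size=1_000_000)

print(samples.mean())  # approximately 2.5, i.e. approximately v_j
```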
To calculate $\mathrm{E}[1/\beta_c]_{q^{k+1}(\beta)}$ we observe, with the use of Eq. 13, that
$$\mathrm{E}\!\left[\frac{1}{\beta_c}\right]_{q(\beta_c \mid r_c, s_c)}
= \int \frac{\bigl((r_c-1)s_c\bigr)^{r_c}}{\Gamma(r_c)}\, \beta_c^{-(r_c+1)-1}\, e^{-(r_c-1)s_c/\beta_c}\, d\beta_c
= \frac{\bigl((r_c-1)s_c\bigr)^{r_c}}{\Gamma(r_c)} \cdot \frac{\Gamma(r_c+1)}{\bigl((r_c-1)s_c\bigr)^{r_c+1}}
= \frac{r_c}{(r_c-1)s_c}. \qquad (62)$$
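The closed form in Eq. (62) can be verified numerically. The sketch below draws samples from the inverse-gamma density that appears in the integrand, with shape $r_c$ and scale $(r_c-1)s_c$, and compares a Monte Carlo estimate of $\mathrm{E}[1/\beta_c]$ against $r_c/((r_c-1)s_c)$; the values of $r_c$ and $s_c$ are arbitrary test values.

```python
import numpy as np
from scipy.stats import invgamma

# Monte Carlo check of Eq. (62): under the inverse-gamma density
# q(beta_c | r_c, s_c) with shape r_c and scale (r_c - 1) * s_c,
# E[1 / beta_c] = r_c / ((r_c - 1) * s_c).  r_c and s_c are arbitrary values.
rng = np.random.default_rng(0)
r_c, s_c = 4.0, 1.5
scale = (r_c - 1.0) * s_c

samples = invgamma.rvs(a=r_c, scale=scale, size=1_000_000, random_state=rng)

mc_estimate = np.mean(1.0 / samples)
closed_form = r_c / ((r_c - 1.0) * s_c)
print(mc_estimate, closed_form)  # the two values agree to about three decimals
```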