for image denoising. However, a study of the literature shows that little work has been done on how to determine regularization parameters and diffusion operators so as to achieve optimal, high-fidelity image restoration results.
In this paper, we extend the variable exponent, linear growth functional (Chen et al., 2006), (Chen and Rao, 2003) to a doubly regularized Bayesian estimation for simultaneous deblurring and denoising.
ing. The Bayesian framework provides a structured
way to include prior knowledge concerning the quan-
tities to be estimated (Freeman and Pasztor, 2000).
Different from traditional “passive” edge-preserving methods (Geman and Reynolds, 1992), our method is an “active”, data-driven approach that integrates self-adjusting regularization parameters and a dynamically computed gradient prior to adapt the fidelity term and multiple image diffusion operators. A new scheme is designed to select the regularization parameters adaptively at different levels based on measurements of local variances, while the chosen diffusion operators are adjusted automatically according to the strength of the edge gradients; a code sketch of this idea follows below. The suggested approach has several important effects: first, it provides a theoretically and experimentally sound account of how local diffusion operators are changed automatically in the BV space; second, the self-adjusting regularization parameters simultaneously control the diffusion operators during image restoration; finally, the process is relatively simple and can easily be extended to other regularization or energy optimization approaches. The experimental results show that the method yields encouraging results under different kinds and amounts of noise and degradation.
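To make this idea concrete, the following Python/NumPy sketch illustrates one plausible form of the two adaptive ingredients. The window size win, base weight lam0, contrast constant k, and the specific mapping rules are illustrative assumptions for exposition only; the actual formulas used by our method are developed in section 3.

import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def local_variance(g, win=7):
    # Sliding-window variance: E[g^2] - (E[g])^2 over a win x win window.
    m = uniform_filter(g, win)
    m2 = uniform_filter(g * g, win)
    return np.maximum(m2 - m * m, 0.0)

def adaptive_lambda(g, lam0=0.05, win=7):
    # Hypothetical rule: scale a base fidelity weight by normalized local
    # variance, so high-detail regions keep a stronger data-fidelity term
    # (less smoothing) than flat, noise-dominated regions.
    v = local_variance(g, win)
    return lam0 * (1.0 + v / (v.mean() + 1e-12))

def diffusion_exponent(g, sigma=1.0, k=0.01):
    # Variable diffusion exponent in the spirit of (Chen et al., 2006):
    # p(x) -> 1 (TV-like, edge-preserving) where gradients are strong,
    # p(x) -> 2 (linear, isotropic diffusion) in smooth regions.
    gy, gx = np.gradient(gaussian_filter(g, sigma))
    grad2 = gx**2 + gy**2
    return 1.0 + 1.0 / (1.0 + k * grad2)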
The paper is organized as follows. In section 2, we
discuss the concepts of BV space, the total variation
(TV) model and its related functionals. In section 3,
we present a Bayesian-estimation-based adaptive variational regularization for the joint estimation of PSFs and images. Numerical approximation and
experimental results are shown in section 4. Conclu-
sions are summarized in section 5.
2 RELATED WORK
2.1 The BV Space and the TV Method
Following the total variation (TV) functional (Rudin et al., 1992), (Chambolle and Lions, 1997), (Weickert and Schnörr, 2001), (Chan et al., 2002), (Aubert and Vese, 1997), we study the total variation functional in the space of functions of bounded variation (BV).
Definition 2.1.1 BV(Ω) is the subspace of functions f ∈ L¹(Ω) for which the quantity

TV(f) = ∫_Ω |Df| dA = sup { ∫_Ω f · div ϕ dA : ϕ ∈ C¹_c(Ω, R^N), ‖ϕ‖_L∞(Ω) ≤ 1 }   (1)

is finite, where dA = dx dy and C¹_c(Ω, R^N) is the space of C¹(Ω) functions with compact support in Ω. BV(Ω), endowed with the norm ‖f‖_BV(Ω) = ‖f‖_L¹(Ω) + TV(f), is a Banach space.
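A standard example (a well-known fact from the BV literature, not specific to this paper) shows why BV(Ω), unlike Sobolev spaces, accommodates sharp edges: the characteristic function of a disk B(0, r) ⊂ Ω is discontinuous, so its gradient does not exist as an L¹ function, yet definition (1) gives

TV(χ_B(0,r)) = Per(B(0, r)) = 2πr,

i.e., the total variation of a piecewise constant image equals the jump height times the length of its discontinuity set.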
When one adopts the TV measure for image regularization, the posterior energy for Tikhonov regularization takes the form, also given in (Rudin et al., 1992),

J(f) = (λ/2) ∫_Ω |g − hf|² dA + ∫_Ω |Df| dA   (2)
where g is the observed noisy image, f is the ideal image, h is the blurring operator (PSF), and λ > 0 is a scaling regularization parameter. When an image f is discontinuous, the gradient of f has to be understood as a measure. The TV(f) functional is often denoted by ∫_Ω |Df| dx dy, with the symbol D referring to the conventional differentiation ∇. For f ∈ L¹(Ω) one uses the identification ∫_Ω |Df| dA = ∫_Ω |∇f| dA to simplify the numerical computation (see (Giusti, 1984), for instance).
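For intuition about minimizing (2), consider the pure denoising case h = I. A common numerical device (a standard approach, not the scheme used later in this paper) is to smooth the TV term to √(|∇f|² + ε²) and run an explicit gradient descent on the resulting energy; the step size τ, the smoothing parameter ε, and the boundary handling below are illustrative choices.

import numpy as np

def tv_denoise(g, lam=0.1, eps=1e-3, tau=0.2, n_iter=200):
    # Explicit gradient descent on the smoothed ROF energy (h = identity):
    #   J(f) = (lam/2) * sum (g - f)^2 + sum sqrt(|grad f|^2 + eps^2),
    # whose descent flow is
    #   f_t = div( grad f / sqrt(|grad f|^2 + eps^2) ) - lam * (f - g).
    f = g.astype(float).copy()
    for _ in range(n_iter):
        # Forward differences (periodic boundaries via np.roll, for brevity).
        fx = np.roll(f, -1, axis=1) - f
        fy = np.roll(f, -1, axis=0) - f
        mag = np.sqrt(fx**2 + fy**2 + eps**2)
        px, py = fx / mag, fy / mag
        # Backward differences yield the discrete divergence (adjoint of grad).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        f += tau * (div - lam * (f - g))
    return f

# Usage: f_hat = tv_denoise(noisy_image, lam=0.1)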
In order to study the influence of the smoothing term in the regularization more precisely, we take a closer look at a more general total variation functional, which helps clarify the convexity criteria in variational regularization. A general bounded variation functional can be written as follows:
J(f_(g,h)) = (λ/2) ∫_Ω (g − hf)² dA + ∫_Ω φ(|∇f(x, y)|) dA
The choice of the function φ is crucial: it determines the smoothness of the resulting function f in the space V = { f ∈ L²(Ω) ; ∇f ∈ L¹(Ω) }, which is not reflexive.
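Two standard instances (given here for orientation; they are classical choices, not contributions of this paper) show how φ controls the solution space:

φ(s) = s²  →  Tikhonov regularization, f ∈ H¹(Ω) (reflexive; edges over-smoothed),
φ(s) = s   →  the TV functional (2), with the non-reflexive space V above as natural domain and its proper relaxation in BV(Ω) (edges preserved).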
In this variational energy functional, the closeness of the solution to the data is imposed by the fidelity term, while the degree of smoothing is governed by the penalty term φ(·). If the energy functional is nonconvex, its minimization becomes considerably more complicated than in the convex case. Although some nonconvex penalty terms φ(·) can achieve edge-preserving results, convex penalty terms allow globally convergent minimization and decrease the computational complexity. In the following, we therefore study φ(·) in the more general form φ(∇f) → φ(Df) in the BV space; two classical penalty choices are contrasted below.
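For example (classical penalties from the edge-preserving literature, not part of our model), the linear-growth function

φ(s) = √(1 + s²)

is convex and still preserves edges reasonably well, whereas the Geman–McClure type function

φ(s) = s² / (1 + s²)

is bounded and strongly edge-preserving but nonconvex (φ″(s) < 0 for s > 1/√3), so gradient-based minimization may stall in local minima.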
2.2 Convex Linear-Growth Functional
Let Ω be an open, bounded, and connected subset of R^N. We use standard notations for the Sobolev