derivative suddenly goes down and is quite unpredictable.
So in this modelling we have only one parameter to estimate, but the introduction of the exponential has brought another difficulty.
In the next step, we try to avoid the difficulty of computing the derivative of the φ_2 function, since it is our main problem in this section. So next, we consider a simple Metropolis scheme for the estimation of the kernel, which does not involve the calculation of this derivative.
3.2 A Metropolis Scheme for the Kernel
Estimation
3.2.1 Algorithm
Here we propose to test a Metropolis scheme for the kernel so that we skip the problem of the derivative of the φ_2 function. We may assume that the minimization of the energy w.r.t. K is faster than the optimization w.r.t. X. Considering this, a Metropolis scheme for K can give good results and should not make the algorithm lose its advantage of speed, because we only have one variable to estimate and, besides, the search space for K is not that large.
The algorithm is the following.
For each iteration IT:
• We randomly modify the configuration of the current kernel K to obtain a new state K' belonging to our search space.
• We calculate the energy associated with this new state, which is H(X, K').
• We compare H(X, K') and H(X, K): if H(X, K') < H(X, K), then our new current kernel is K'; else if H(X, K') > H(X, K), then we use a Boltzmann acceptance criterion to decide whether we accept this new state K' or not. The acceptance probability depends on the temperature T of the system: p = e^{(H(X,K) − H(X,K'))/T}.
• We finally decrease the temperature T of the system.
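The Metropolis step above can be sketched as follows. This is a minimal sketch in which `energy(k)` stands for H(X, K) with X held fixed; the perturbation size `step` and the cooling schedule are assumptions, since the text does not specify them.

```python
import math
import random

def metropolis_step_k(k, energy, T, step=0.01, k_min=0.0, k_max=4.0):
    """One Metropolis update of the scalar kernel parameter k.

    energy(k) plays the role of H(X, K) with X held fixed;
    the search space is the interval [k_min, k_max].
    """
    # Randomly perturb k to obtain a candidate k' inside the search space.
    k_new = min(k_max, max(k_min, k + random.uniform(-step, step)))
    dE = energy(k_new) - energy(k)
    # Accept if the energy decreases; otherwise accept with
    # Boltzmann probability p = exp((H(X,K) - H(X,K')) / T).
    if dE < 0 or random.random() < math.exp(-dE / T):
        return k_new
    return k
```

In practice one would call this once per iteration IT and decrease T afterwards, e.g. with a geometric cooling T ← 0.999·T.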
Here we propose to test the following scheme:
1. Langevin scheme for X
2. Metropolis scheme for K and the Gaussian form
for K defined previously
3.2.2 Results
Herein, we use the Gaussian form of section 3.3.1 for the kernel K:

K_{i,j} = (1/Z) · e^{−k·((i−c)² + (j−c)²)}

Our search space for k is the interval [0, 4] with a precision of 2 decimals.
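This kernel can be built as follows. A minimal sketch, assuming that Z is the constant that normalizes the entries to sum to 1 and that the support is 5×5; the text specifies neither.

```python
import numpy as np

def gaussian_kernel(k, size=5):
    """Gaussian kernel K_{i,j} = (1/Z) * exp(-k*((i-c)^2 + (j-c)^2)).

    Assumption: Z is the normalizing constant making the entries sum to 1.
    """
    c = size // 2                      # center pixel c of the kernel
    i, j = np.indices((size, size))
    K = np.exp(-k * ((i - c) ** 2 + (j - c) ** 2))
    return K / K.sum()                 # divide by Z = sum of all entries
```

Note that large k concentrates the mass at the center (a near-identity kernel), while k = 0 gives a flat averaging kernel; with 2-decimal precision, the search space [0, 4] contains 401 candidate values.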
So the following simulations have been done. The data is the blurred image without noise. We start the simulation with pure noise. Then:
• Phase I: pre-treatment. We run 300 < N < 1000 iterations of the X scheme.
• Phase II: we run n times the following cycle:
1. m = 1500 iterations for K
2. p = 200 iterations of the X scheme
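The two-phase schedule can be sketched as follows; `langevin_step_X` and `metropolis_step_K` are hypothetical placeholders for the two schemes described above, and the cooling rate is an assumption.

```python
def run_simulation(X, k, Y, langevin_step_X, metropolis_step_K,
                   N=500, n=3, m=1500, p=200, T0=1.0, cooling=0.999):
    """Two-phase schedule: Langevin pre-treatment for X, then n cycles
    alternating Metropolis updates of k and Langevin updates of X."""
    T = T0
    # Phase I: pre-treatment, 300 < N < 1000 Langevin iterations for X.
    for _ in range(N):
        X = langevin_step_X(X, k, Y)
    # Phase II: n cycles of m Metropolis iterations for K,
    # followed by p Langevin iterations for X.
    for _ in range(n):
        for _ in range(m):
            k = metropolis_step_K(k, X, Y, T)
            T *= cooling              # decrease the temperature
        for _ in range(p):
            X = langevin_step_X(X, k, Y)
    return X, k
```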
The initialization of the kernel is very important. We cannot start with the identity kernel, because it is a trivial solution of our optimization problem for the energy:

φ_2(X, K, Y) = ∑_i (K ∗ X(i) − Y_i)²

Here Y is the data image and X is the current image. If we initialize with the identity kernel, then after the pre-treatment (Phase I), the current image X is denoised, but still blurred. So at the beginning of Phase II, X is close to the data Y, with less noise. But the important fact is that the blur in these two images is the same, so that optimizing the energy w.r.t. k has the identity kernel as a trivial solution, since the above sum is minimized by it.
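The energy above can be evaluated as follows, and the sketch also illustrates the identity-kernel degeneracy: when Y equals X, the identity kernel makes φ_2 vanish. The zero-padded "same"-size convolution is an assumption, since the text does not specify the boundary handling (for a symmetric Gaussian kernel the correlation computed below coincides with convolution).

```python
import numpy as np

def convolve_same(X, K):
    """Naive 'same'-size 2-D convolution with zero padding (assumption:
    the boundary handling is not specified in the text)."""
    kh, kw = K.shape
    ph, pw = kh // 2, kw // 2
    Xp = np.pad(X, ((ph, ph), (pw, pw)))
    out = np.zeros_like(X, dtype=float)
    for di in range(kh):              # accumulate shifted, weighted copies
        for dj in range(kw):
            out += K[di, dj] * Xp[di:di + X.shape[0], dj:dj + X.shape[1]]
    return out

def phi2(X, K, Y):
    """Data term phi_2(X, K, Y) = sum_i ((K * X)(i) - Y_i)^2."""
    return float(np.sum((convolve_same(X, K) - Y) ** 2))
```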
How should we initialize the convolution kernel, knowing that the identity kernel is a stable point?
In practice, we have initialized it very close to the identity. In that way, the small difference from the identity kernel lets us avoid the previous problem, while the proximity to the identity does not degrade the image as a strong convolution kernel would have done.
We finally obtained interesting results with this algorithm. Results on the synthetic image, with the same parameters of noise and blur as for the tests in section 3.1, are shown on figure 5.
The result image is satisfying: the edges are recovered well, without introducing artifacts. The result is as good as the one we obtained in section 3.1 with the known kernel.
IMAGE DECONVOLUTION USING A STOCHASTIC DIFFERENTIAL EQUATION APPROACH