[Figure 1 diagram: coded apertures in the lens form left and right images on the image sensors; stereo matching of the left and right input images yields a disparity map; the disparity is converted to blur kernel size (plot of kernel size [pixel] vs. disparity [pixel]); Wiener deconvolution with a window function and scaled blur kernel, followed by integration (weighted sum) of the results, produces the all-in-focus image.]
Figure 1: Process flow of proposed method.
and the clear scene is stably recovered using the pre-
cisely determined PSF.
Related work is briefly summarized in Section 2.
An overview of the proposed system and the algo-
rithms is presented in Section 3, together with a proof
of the linear relationship between blur size and dis-
parity. Experiments are discussed in Section 4, while
Section 5 concludes the paper.
2 RELATED WORK
Optical blur (defocus) appearing in images captured
with conventional cameras is modeled as the convo-
lution of the sharp original scene and a blur kernel
(i.e., the PSF). Thus, the latent image can be recon-
structed by applying an inverse filter or deconvolution
techniques to the captured image. Several methods
including Richardson-Lucy deconvolution (Richard-
son, 1972) and MAP estimation (Lam and Goodman,
2000) have been proposed; however, the performance
of the reconstruction depends greatly on the correctness of the blur kernel. In particular, the circular blur kernel of a conventional aperture has many zero crossings in the spatial frequency domain, and reconstruction at frequencies of low gain is unstable and strongly amplifies noise. Therefore, the idea of designing an
aperture shape with the desirable spatial frequency
characteristics was proposed. Hiura et al. introduced
a coded aperture to improve the precision of depth
estimation using the defocus phenomenon (Hiura and
Matsuyama, 1998). Similarly, Levin et al. (Levin
et al., 2007) and Veeraraghavan et al. (Veeraraghavan
et al., 2007) tried to remove the defocus effect from a single image taken with a camera fitted with an independently designed coded aperture. Desirable aperture shapes
have been explored by Zhou et al. (Zhou and Nayar,
2009) and Levin et al. (Levin, 2010).
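To make the role of spectral zeros concrete, the sketch below simulates 1-D defocus as circular convolution with a box kernel (whose spectrum, like that of a circular aperture, has many zero crossings) and recovers the signal with a Wiener filter; the regularization term keeps the inversion stable near the zeros. This is an illustrative sketch only: the signal, kernel, and SNR value are hypothetical and not taken from the paper.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr):
    """Frequency-domain Wiener deconvolution of a 1-D signal.
    The 1/snr term regularizes the inversion where the kernel
    spectrum is close to zero (snr is a hypothetical choice)."""
    n = blurred.size
    K = np.fft.fft(kernel, n)                     # optical transfer function
    W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(W * np.fft.fft(blurred)))

# Sparse 1-D "scene" blurred by an 8-tap box kernel; a box spectrum,
# like a circular aperture's, contains many zero crossings.
signal = np.zeros(256)
signal[60], signal[130] = 1.0, -0.5
box = np.ones(8) / 8.0
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(box, 256)))
restored = wiener_deconvolve(blurred, box, snr=1e4)
```

A naive inverse filter (dividing by the kernel spectrum) would blow up at the spectral zeros; the coded apertures discussed above are designed precisely so that the gain stays bounded away from zero across the frequencies of interest.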
Figure 2: Shapes of Zhou’s codes for various noise levels,
σ (Zhou and Nayar, 2009).
Since the size of the blur kernel varies with the
distance between the camera and object, it is neces-
sary to estimate the depth of the captured scene accu-
rately. Depth estimation through the optical defocus
effect is called Depth from Defocus, and a number of
studies on this aspect have been carried out (Schech-
ner and Kiryati, 2000). In general, it is not easy to
estimate the depth of a scene using a single image, be-
cause simultaneous estimation of the blur kernel and
latent image is under-constrained. Therefore, Levin et
al. (Levin et al., 2007) assumed a Gaussian prior on
the edge power histogram to make the problem well-
conditioned. However, this method is not always ro-
bust, and the authors mentioned that human assistance
was sometimes necessary.
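The dependence of blur size on depth can be made concrete with the thin-lens model. The sketch below computes the blur-circle diameter as a function of object distance; since the expression is affine in 1/d, and stereo disparity is likewise proportional to 1/d, it is consistent with the linear blur-size/disparity relationship discussed in Section 3. The focal length, aperture, and focus distance are hypothetical values, not the paper's calibration.

```python
def blur_diameter(d, d_f=1.0, f=0.05, A=0.05 / 1.8):
    """Thin-lens blur-circle diameter on the sensor (same units as d)
    for an object at distance d, with the lens of focal length f and
    aperture diameter A focused at d_f. All values are hypothetical
    (f = 50 mm, A = f/1.8, focus at 1 m)."""
    # From the thin-lens equation and similar triangles:
    #   b = A * f * |d - d_f| / (d * (d_f - f)),
    # which is affine in 1/d, like stereo disparity.
    return A * f * abs(d - d_f) / (d * (d_f - f))

# Blur grows as the object moves away from the focus plane:
for d in (1.0, 1.2, 1.5, 2.0):
    print(f"d = {d:.1f} m -> blur = {1e3 * blur_diameter(d):.3f} mm")
```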
Several studies estimating depth from multiple
images captured with different optical parameters have
VISAPP 2012 - International Conference on Computer Vision Theory and Applications