next. First, an object is placed at a known position
(depth) from the camera, M/P is calculated, and the
(M/P, depth) pair is recorded. This is repeated for a
variety of object positions, producing a table of
M/P-depth pairs. Then, curve fitting is performed to
obtain a polynomial, which is the sought equation. In
this work a zoom of 300 steps from ZoomWide is
used. The obtained equation is
dist = a4·Y^4 + a3·Y^3 + a2·Y^2 + a1·Y + a0        (3)

where the coefficients a4, ..., a0 result from the
polynomial fit.
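The calibration procedure described above can be sketched as follows. The (M/P, depth) pairs below are hypothetical placeholders for the recorded table, and the original system implements this in MATLAB subroutines rather than Python:

```python
import numpy as np

# Hypothetical calibration table: M/P values measured for an object
# placed at known depths (centimetres) inside the estimation range
# of 7 to 50 cm. The real recorded values are not reproduced here.
mp_values = np.array([0.12, 0.25, 0.41, 0.58, 0.73, 0.86, 0.95])
depths_cm = np.array([7.0, 12.0, 18.0, 25.0, 32.0, 41.0, 50.0])

# Least-squares fit of a polynomial dist = f(Y), where Y is the M/P
# value; equation (3) is such a fit (the degree here is illustrative).
coeffs = np.polyfit(mp_values, depths_cm, deg=4)

def estimate_depth(mp):
    """Evaluate the fitted calibration polynomial at an M/P value."""
    return np.polyval(coeffs, mp)
```

Once the coefficients are stored, depth estimation reduces to evaluating the polynomial at each measured M/P value.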
The depth estimation is performed over 16x16-pixel
regions. First, k images are captured, M/P is
calculated for each region in the scene, and equation
3 is used to estimate the depth of each region, where
Y is the M/P value. Figure 3 shows the result of
applying this method to a scene.
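The per-region estimation can be sketched as below, assuming a per-pixel defocus-measure image and a calibrated function mapping Y = M/P to depth; the function names are illustrative, and the original implementation is in MATLAB:

```python
import numpy as np

def region_depth_map(measure_img, depth_from_Y, block=16):
    """Split a per-pixel defocus-measure image into block x block
    regions, average the measure Y = M/P over each region, and map the
    average to depth with the calibrated polynomial depth_from_Y."""
    h, w = measure_img.shape
    rows, cols = h // block, w // block
    depth = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            region = measure_img[i * block:(i + 1) * block,
                                 j * block:(j + 1) * block]
            depth[i, j] = depth_from_Y(region.mean())
    return depth
```

For a 640x480 image this yields a 30x40 grid of depth estimates, one per 16x16 region.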
2.2.4 User Interface
SIVEDI has a user interface developed with
MATLAB subroutines. The user has complete
access to and control over the system to specify the
analysis to perform. When the user interface is
executed, the main window appears and shows
whether the system is calibrated. To be calibrated,
the system must contain the Focus Step-Depth
equation to be applied in SFF and the M/P-Depth
equation to be applied in SFD. If the user wishes to
calibrate the system, a window appears that guides
the user through this task. Once the system is
calibrated, the main window shows the options to
perform the SFF or SFD analysis. For any analysis it
is possible to record the obtained estimation. The
default parameters for the analysis are: the region
size of the analysis (16x16 pixels), the number of
captured images for SFF (25 images), the increment
in focus steps at which the images are captured for
SFF, the number of captured images for SFD (7
images), the focus steps at which the images are
captured for SFD, the focus measure (Laplacian),
and the estimation range (7 to 50 centimetres). If
any of these parameters needs to be modified, this is
done directly in the implemented subroutines in the
m-files.
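The default focus measure named above is the Laplacian. One common form is the sum of squared discrete-Laplacian responses over a region; the sketch below shows that form, and the exact variant used by SIVEDI may differ:

```python
import numpy as np

def laplacian_focus_measure(gray):
    """Sum of squared 4-neighbour discrete Laplacian responses over a
    grayscale region. Sharper (better-focused) texture yields a larger
    value; a perfectly flat region yields zero."""
    g = np.asarray(gray, dtype=float)
    # Discrete Laplacian evaluated on the interior pixels only.
    lap = (g[1:-1, :-2] + g[1:-1, 2:] + g[:-2, 1:-1] + g[2:, 1:-1]
           - 4.0 * g[1:-1, 1:-1])
    return float(np.sum(lap ** 2))
```

In SFF this measure is evaluated per region in each captured image, and the focus step maximizing it indicates the region's depth.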
3 CONCLUSIONS
The execution time of SFF is reduced because the
number of captured images is reduced and
interpolation among three images is used to obtain
the maximum focus measure. The noise is minimized
by averaging four images for the SFF analysis. The
implemented platform prevents the use of only two
images in the SFD analysis; as a consequence, a
method to determine the number of captured images
needed to perform the SFD analysis is proposed.
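The three-image interpolation mentioned above is commonly realized as a parabolic fit through the focus measures of the best-focused image and its two neighbours; a minimal sketch, assuming equally spaced focus steps (the function name and interface are illustrative, not taken from SIVEDI):

```python
def parabolic_peak(x, f):
    """Given focus measures f = (f0, f1, f2) sampled at three equally
    spaced focus steps x = (x0, x1, x2), with f1 the largest sample,
    fit a parabola and return the interpolated position of its maximum.
    This lets SFF locate the focus peak between captured images."""
    x0, x1, x2 = x
    f0, f1, f2 = f
    h = x1 - x0  # step spacing, assumed equal to x2 - x1
    denom = f0 - 2.0 * f1 + f2
    if denom == 0.0:
        return x1  # degenerate (flat) case: keep the middle sample
    return x1 + 0.5 * h * (f0 - f2) / denom
```

Because the peak is interpolated, fewer focus steps need to be captured than the depth resolution would otherwise demand.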
Figure 3: (a) Analyzed scene and (b) obtained
estimation using SFD.
Also, the use of an equation (M/P) to calculate the
relative defocus among the captured images is
proposed. This measure is related to the depth of
the object. The noise is reduced by averaging eight
images for the SFD analysis.
The obtained results show that SFD is less sensitive
to noise than SFF, but more sensitive to the spectral
content of the analyzed regions. SFD is harder to
calibrate than SFF. The execution time is lower for
SFD than for SFF. Images must have high contrast
for both techniques to work properly.
ICINCO 2005 - ROBOTICS AND AUTOMATION