constraint to remove mismatches, an approximate reliability measure is used to detect mismatches, so that disparities are assigned only to those pixels whose reliabilities exceed a given threshold. A generalized ground control points (GGCPs) scheme is used in (Kim et al., 2005), where multiple disparity candidates are assigned to all pixels by local matching using oriented spatial filters.
A different method is presented in (Boykov et al., 2001). Using graph cuts, dense features are defined and extracted during the correspondence process. The boundary condition is enforced on the whole boundary of a dense feature, producing accurate results in areas where features are detected and no matches in featureless regions. A similar algorithm is presented in (Veksler, 2002), where dense features are defined as sets of connected pixels such that the intensity edges on the boundary of these sets are stronger than their matching errors. After all dense features have been computed, the pixels belonging to a dense feature are assigned the same disparity value.
3 PROPOSED ALGORITHM
3.1 Pre-Processing and Disparity Estimation
Since the initial intensity values are unreliable in many practical cases, a Laplacian prefilter is first applied to the initial frames for intensity normalization. Then, a weighted mean filter is used to reduce the effect of noise on the initial disparity estimation. The filter can be described by the following equation:
F(x,y) = \frac{1}{4}\left( f(x-1,y) + f(x+1,y) \right) + \frac{1}{2} f(x,y)    (1)

where f is the original image and F is the filtered one.
Of course, a two-dimensional filter produces better
results, but also increases the computational cost.
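For concreteness, a minimal NumPy sketch of this horizontal filter is given below; the function name and the border policy of leaving the edge columns unchanged are assumptions, as the paper does not specify them.

```python
import numpy as np

def weighted_mean_1d(f):
    # Eq. (1): F(x,y) = (f(x-1,y) + f(x+1,y))/4 + f(x,y)/2,
    # applied along each scanline. Border columns are copied
    # unchanged (an assumed border policy; the paper gives none).
    f = f.astype(np.float32)
    F = f.copy()
    F[:, 1:-1] = 0.25 * (f[:, :-2] + f[:, 2:]) + 0.5 * f[:, 1:-1]
    return F
```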
Then, assuming that the source images are
rectified, the matching cost for a scanline is
calculated using the Absolute Differences (AD) of
intensities, which is given by the following equation:
d(x,y) = \min_{D \in [0,\, d_{\max}]} \left| I_L(x,y) - I_R(x+D,\, y) \right|    (2)

where D is the disparity value that belongs to the interval [0, d_max], and I_L, I_R are the intensity values in the left and right image, respectively.
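A straightforward sketch of this matching step follows: the per-pixel AD cost is evaluated for every disparity in [0, d_max] and the winner-take-all minimum over D is kept (a selection step the equation implies but the text does not spell out). The function name and the handling of pixels near the image border are assumptions.

```python
import numpy as np

def ad_disparity(IL, IR, d_max):
    # Per-pixel AD cost of Eq. (2) for every D in [0, d_max],
    # followed by a winner-take-all argmin over D. Assumes
    # rectified, same-size grayscale images; the x+D search
    # direction follows the equation above.
    IL = IL.astype(np.float32)
    IR = IR.astype(np.float32)
    h, w = IL.shape
    cost = np.full((d_max + 1, h, w), np.inf, dtype=np.float32)
    for D in range(d_max + 1):
        # I_R(x+D, y) is defined only for x <= w-1-D
        cost[D, :, :w - D] = np.abs(IL[:, :w - D] - IR[:, D:])
    return np.argmin(cost, axis=0)
```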
3.2 Post-Processing
While an AD algorithm is fast and simple, it is not highly accurate and introduces several mismatches in the initial disparity maps. Thus, efficient post-processing filtering is required. Typical linear or order-statistic filtering techniques perform inadequately, as they tend to oversmooth objects and distort their edges. A new non-linear filtering technique is proposed instead.
Assuming that the scene is piecewise constant, mode filtering is first applied to the initial disparity map. It ranks the pixels in a small neighborhood according to their disparity values, and the mode of this ordered list is used as the depth value for the central pixel. Of course, the computational effort rises quickly with the number of disparity values to be sorted. For this reason, a 3x3 neighborhood is chosen, although a larger neighborhood would yield better results.
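A direct, unoptimized sketch of this mode filter is given below; the border handling (edge pixels left unchanged) and the tie-breaking rule are assumptions not stated in the paper.

```python
import numpy as np

def mode_filter_3x3(disp):
    # Replace each interior pixel by the most frequent disparity
    # in its 3x3 neighborhood. Ties resolve to the smallest value
    # (np.unique sorts ascending); border pixels are left unchanged.
    out = disp.copy()
    h, w = disp.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = disp[y - 1:y + 2, x - 1:x + 2].ravel()
            vals, counts = np.unique(window, return_counts=True)
            out[y, x] = vals[np.argmax(counts)]
    return out
```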
Next, a one-dimensional filtering technique is employed to incorporate, in a computationally efficient manner, all the available disparity information between scanlines. Two horizontal and two vertical simple filters modify single pixels whose values differ from those in a small neighborhood, while two adaptive filters operate on larger areas. Since the incorrect reconstructions are randomly distributed over the initial disparity maps, a soft modification procedure is adopted, in which incorrect disparities are gradually replaced while the reliable areas are made even more reliable.
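As an illustration only, one plausible form of such a simple horizontal filter is sketched below; the exact rules used by the algorithm are stated in the following subsection, so the particular rule shown here is an assumption.

```python
import numpy as np

def fix_isolated_horizontal(disp):
    # Hypothetical rule: a pixel whose disparity differs from two
    # agreeing horizontal neighbors is treated as a mismatch and
    # takes the neighbors' common value.
    out = disp.copy()
    left, mid, right = disp[:, :-2], disp[:, 1:-1], disp[:, 2:]
    isolated = (left == right) & (mid != left)
    out[:, 1:-1][isolated] = left[isolated]
    return out
```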
In order to separate the incorrect disparities from the correct ones, the following heuristics are used (sketched in code below):
1. Any reliable area in the disparity map must have more than 3 pixels of the same disparity value in range. Any smaller area is considered unreliable, and its disparity values are set to undefined.
2. Any undefined area between a near and a far object belongs to the near object. This heuristic is justified by the observation that such undefined areas are mainly caused by occlusions, where far objects are occluded by near ones.
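A sketch of both heuristics follows; the choice of 4-connectivity for regions, the scanline-based filling, and the UNDEF marker are implementation assumptions not fixed by the paper.

```python
import numpy as np
from scipy import ndimage  # connected components; an implementation choice

UNDEF = -1  # assumed marker for undefined disparities

def apply_heuristics(disp):
    out = disp.astype(np.int32)  # signed type so UNDEF is representable
    # Heuristic 1: uniform-disparity regions of 3 pixels or fewer
    # are unreliable; mark them undefined (4-connectivity assumed).
    for d in np.unique(out):
        labels, _ = ndimage.label(out == d)
        sizes = np.bincount(labels.ravel())
        small = np.flatnonzero(sizes <= 3)
        small = small[small != 0]          # label 0 is the background
        out[np.isin(labels, small)] = UNDEF
    # Heuristic 2: fill each undefined run on a scanline from the
    # nearer object, i.e. the neighbor with the larger disparity.
    for row in out:
        x, w = 0, row.size
        while x < w:
            if row[x] != UNDEF:
                x += 1
                continue
            x0 = x
            while x < w and row[x] == UNDEF:
                x += 1
            left = row[x0 - 1] if x0 > 0 else UNDEF
            right = row[x] if x < w else UNDEF
            fill = max(left, right)        # UNDEF (= -1) loses to any disparity
            if fill != UNDEF:
                row[x0:x] = fill
    return out
```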
Although it is difficult to determine accurate
depth values at object boundaries, experimental
results show that these heuristics work well in
practice and produce satisfactory results. Next, we
will examine the post-processing filters separately
and then we will present the block diagram of the
proposed algorithm. The rules for the two horizontal
simple filters are as follows: