Some methods require additional information, such as depth knowledge of the scene (Kopf et al., 2008; Narasimhan and Nayar, 2003).
The first contribution of this research is the extension of existing dehazing algorithms to use sequences of images captured over time. An optimization algorithm enforces a constant depth constraint over the sequence of images and allows the decomposition of the scene into a sequence of atmospheric scattering coefficients and a relative depthmap. The second contribution is an experimental comparison of dehazing algorithms, in the context of measuring atmospheric scattering and depth recovery, using both simulation and depth measurements from real data.
2 BACKGROUND
In computer vision and graphics, the widely used model for describing the formation of haze in images is as follows (Tan, 2008; Fattal, 2008; Narasimhan and Nayar, 2000; Narasimhan and Nayar, 2002):

I = R e^{−βz} + A∞ (1 − e^{−βz})    (1)

where I is the observed image intensity, R is the scene radiance, A∞ is the global atmospheric airlight, β is the atmospheric scattering coefficient and z is the depth of the scene. The term e^{−βz} represents the medium transmission, describing the fraction of light that is not scattered as it passes through the medium. This model assumes a homogeneous atmosphere.
The term R e^{−βz} of equation 1 is known as the direct transmission, and the second term A∞ (1 − e^{−βz}) is known as airlight. Direct transmission is the part of the scene radiance that eventually reaches the viewpoint after suffering attenuation as it passes through the medium. Airlight results from light scattered by the medium towards the viewpoint and causes an increase in brightness as the depth of the scene increases.
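As a concrete illustration of equation 1, the forward model can be simulated directly. The sketch below is only illustrative and is not part of the methods compared in this research; the scene radiance, depthmap, scattering coefficient and airlight values are arbitrary assumptions chosen to show how the direct transmission and airlight terms combine.

```python
import numpy as np

def apply_haze(radiance, depth, beta, airlight):
    """Synthesize a hazy image using equation 1: I = R e^(-beta z) + A_inf (1 - e^(-beta z)).

    radiance : (H, W, 3) scene radiance R, values in [0, 1]
    depth    : (H, W) scene depth z
    beta     : atmospheric scattering coefficient
    airlight : (3,) global atmospheric airlight A_inf
    """
    transmission = np.exp(-beta * depth)[..., None]   # medium transmission e^(-beta z)
    direct = radiance * transmission                  # attenuated scene radiance
    air = airlight * (1.0 - transmission)             # light scattered towards the viewpoint
    return direct + air

# Assumed example: a uniform gray scene whose depth grows from left to right.
R = np.full((100, 100, 3), 0.5)
z = np.tile(np.linspace(1.0, 50.0, 100), (100, 1))
hazy = apply_haze(R, z, beta=0.05, airlight=np.array([0.9, 0.9, 0.95]))
```

In the synthesized image the distant (right-hand) pixels are dominated by the airlight term and approach A∞, while the nearby pixels retain most of the scene radiance, matching the increase in brightness with depth described above.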
This research focuses on three existing dehazing methods. Each of them dehazes an image of a scene and produces a relative depthmap scaled by the atmospheric scattering coefficient β (in equation 1). The polarization-based dehazing method uses a sequence of two or more images captured with different degrees of polarization. The degree of polarization is varied by varying the angle of a polarizer filter attached to the camera. Using the difference in airlight intensity between the images and the haze image model (equation 1), the scene is dehazed and a depthmap is produced. The 'Dichromatic Framework' measures atmospheric scattering using changes in weather conditions. The color of a scene point is modeled as a linear combination of direct transmission and airlight vectors in a color space, and may vary anywhere within the plane (the dichromatic plane) defined by these vectors. The 'Dark Channel' method uses a statistical prior to dehaze a single haze-filled input image. It relies on the observation that haze-free outdoor images are composed of local patches containing pixels with very low intensity in at least one color channel, owing to the abundance of shadows, colorful surfaces and dark surfaces.
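For illustration, the statistic behind this observation, the dark channel, can be computed as a minimum over color channels followed by a minimum over a local square patch. The sketch below assumes a 15-pixel patch size and only computes the statistic itself; it is not the full dehazing method described above.

```python
import numpy as np

def dark_channel(image, patch_size=15):
    """Per-pixel minimum over color channels, then over a local square patch.

    For haze-free outdoor images this statistic tends to be close to zero,
    which is the prior the 'Dark Channel' method builds on.
    image : (H, W, 3) array with values in [0, 1]
    """
    min_channel = image.min(axis=2)                 # minimum over RGB at each pixel
    pad = patch_size // 2
    padded = np.pad(min_channel, pad, mode="edge")  # replicate borders
    h, w = min_channel.shape
    dark = np.empty_like(min_channel)
    for y in range(h):
        for x in range(w):
            dark[y, x] = padded[y:y + patch_size, x:x + patch_size].min()
    return dark
```

On a hazy input image the same statistic is no longer close to zero, and its deviation from zero gives a rough per-patch indication of how much airlight has been added.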
3 CONSTANT DEPTH CONSTRAINT
The Constant Depth Constraint (CDC) is based on the fact that the depth of the scene being captured in the images remains constant over time. The algorithm is based on the haze image model (equation 1), which is rewritten as follows:

I^c_i(x) = R^c(x) T_i(x) + A^c (1 − T_i(x))    (2)

T_i(x) = e^{−β_i z(x)}    (3)
The superscript c signifies that a quantity is defined over the three color channels (RGB), and the subscript i signifies that it varies over time. T_i(x) is the global transmittance of the image captured at time i, and x is the spatial index corresponding to square patches over the images. The scene radiance R^c(x) varies as the direction of the illumination from the sun changes. This variation is assumed to be small over the limited period of time during which the sequence of images is captured (see Figure 2). In addition to this assumption, we normalized the sequence of images captured of the same scene: we chose a flat surface in the scene and normalized our radiance measurements by the radiance of this surface. As the illumination in the scene changes, the radiance of this flat surface varies with it, so normalizing the scene radiance of each image by it essentially factors out the variation across the sequence of images. The flat surface chosen in the scene can be seen in Figure 1.
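A minimal sketch of this normalization is given below. The coordinates of the flat reference surface are placeholders (the actual region is the surface shown in Figure 1); each image is divided by the mean radiance of that region so that illumination changes shared by the whole scene are factored out of the sequence.

```python
import numpy as np

def normalize_sequence(images, ref_rows, ref_cols):
    """Normalize each image in a time sequence by the radiance of a flat reference surface.

    images   : list of (H, W, 3) arrays captured of the same scene over time
    ref_rows : slice selecting the rows of the flat reference surface (assumed location)
    ref_cols : slice selecting its columns
    """
    normalized = []
    for img in images:
        # Mean radiance of the reference patch, one value per color channel.
        ref = img[ref_rows, ref_cols].reshape(-1, 3).mean(axis=0)
        normalized.append(img / ref)  # factor out illumination changes over the sequence
    return normalized

# Hypothetical reference region; in practice it is the flat surface shown in Figure 1.
# normalized = normalize_sequence(images, ref_rows=slice(200, 230), ref_cols=slice(310, 360))
```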
The CDC is used to form an optimization algorithm that recovers a sequence of atmospheric scattering coefficients, one for each image captured over time, together with a single depthmap of the scene. The algorithm is based on the relation between transmittance, atmospheric scattering and depth in equation 3 under the constraint that the depth of the scene is constant (CDC). This
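One way to make the constant depth constraint concrete is sketched below, under the assumption that per-image transmittance estimates T_i(x) are already available (for example from one of the dehazing methods above). Equation 3 gives −ln T_i(x) = β_i z(x), so stacking these values for all images and patches yields a matrix that, under the CDC, has rank one; a singular value decomposition factors it into per-image coefficients β_i and a shared relative depthmap z(x), each up to a common scale. This is only an illustrative sketch of the constraint, not the optimization algorithm used in this research.

```python
import numpy as np

def factor_beta_and_depth(transmittances):
    """Factor per-patch transmittances into scattering coefficients and a shared depthmap.

    transmittances : (N, P) array of T_i(x) for N images and P patches.
    From equation 3, -ln T_i(x) = beta_i * z(x); under the constant depth
    constraint this matrix is rank one, so its leading SVD component gives
    the beta_i and the relative depthmap z(x), each up to a common scale.
    """
    log_t = -np.log(np.clip(transmittances, 1e-6, 1.0))   # entries beta_i * z(x)
    u, s, vt = np.linalg.svd(log_t, full_matrices=False)
    beta = u[:, 0] * s[0]   # per-image scattering coefficients (relative scale)
    depth = vt[0]           # relative depthmap, one value per patch
    if depth.mean() < 0:    # resolve the joint sign ambiguity of the SVD factors
        beta, depth = -beta, -depth
    return beta, depth
```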