first few images from the sun flickering effects. In our implemented system, we used the first 25 frames for this step.
2. Get the new image in the sequence, I_{0,k}, assuming that the previous images I_{0,k−1}, . . . , I_{0,k−N} have been recovered after sunflicker removal (I_{R,k−1}, . . . , I_{R,k−N}). Advance time k (i.e. all previous data structures that had index k now have index k − 1).
3. Predict the flicker pattern by
• Warping all the filtered versions of the difference images, H_{k−1}, . . . , H_{k−N}, with respect to the current frame I_{0,k} to be recovered, assuming that M_{k,k−1} ≈ M_{k−1,k−2}. All other previous homographies were obtained from actual image matches and are thus already known.
• Learning the sunflicker pattern from the registered filtered difference images. After registration, the motion of the camera is compensated. In the learning phase, all the difference images of the previous frames (H_{k−1}, . . . , H_{k−N}) are converted into column vectors. From the array of all the registered H (H_{k−1}, . . . , H_{k−N}), a large matrix W_{t−1} is created having P rows and N columns, where P is the number of pixels per frame and N is the total number of frames in the learning sequence.
• Predicting Ĥ_k using the learned model. For learning, an open-loop linear dynamic model (Doretto et al., 2003) is used. In this step the last frame H_{k−1} of the learned sequence is taken as the first frame for the synthesis part.
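The learning and prediction steps above can be sketched as follows. This is a minimal stand-in, not the authors' implementation: the P × N matrix W is built from the registered difference images, a PCA basis is fitted, and a least-squares state-transition fit gives the one-step open-loop prediction of Ĥ_k; Doretto et al. (2003) describe the full dynamic-texture model. Function and parameter names are ours.

```python
import numpy as np

def predict_next_flicker(H_stack, n_components=5):
    """One-step open-loop prediction of the next filtered difference
    image from a PCA subspace with a least-squares linear state
    transition, in the spirit of Doretto et al. (2003).

    H_stack: (N, rows, cols) array of registered difference images
             H_{k-N}, ..., H_{k-1}, oldest first.
    Returns the predicted next frame with the same shape as one frame.
    """
    N, rows, cols = H_stack.shape
    W = H_stack.reshape(N, -1).T           # P x N (P pixels, N frames)
    mean = W.mean(axis=1, keepdims=True)
    W0 = W - mean
    # PCA basis via thin SVD: W0 ~ U S V^T; states are x_t = U^T w_t
    U, S, Vt = np.linalg.svd(W0, full_matrices=False)
    U = U[:, :n_components]
    X = U.T @ W0                           # n_components x N state trajectory
    # Least-squares transition A with x_{t+1} ~ A x_t
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    x_next = A @ X[:, -1]                  # open-loop one-step prediction
    h_next = U @ x_next + mean[:, 0]
    return h_next.reshape(rows, cols)
```

For a flicker sequence that really does evolve linearly in a low-dimensional subspace, this prediction is exact; in practice it is an approximation refined later by the real registration in step 6.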
4. Create the correction factor Ĉ_k using the predicted low-pass filtered version of the difference image Ĥ_k and the approximate difference image Î_{d,k}. To find the approximate difference image, the previously recovered image I_{R,k−1} is warped into the current frame I_{0,k} position using the last homography. Using the warped portion of the previous recovered image and the rest from the current original image, an approximate median image is created, which is fed into the system to find the approximate difference image Î_{d,k} of the current frame. Using this approximate difference image Î_{d,k} and the predicted Ĥ_k, the correction factor Ĉ_k is found.
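The construction of the approximate difference image can be sketched as below. The warping helper is a minimal nearest-neighbour stand-in (a real implementation would use a proper interpolating warp), and where the paper fuses the approximate median into the running median filter, the sketch simply subtracts it from the current frame; all names are ours.

```python
import numpy as np

def warp_homography(img, M, out_shape):
    """Nearest-neighbour inverse warp of img by the 3x3 homography M
    (mapping source coordinates to destination coordinates).
    Returns the warped image and a mask of covered pixels."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(M) @ dst
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros(h * w, img.dtype)
    out[valid] = img[sy[valid], sx[valid]]
    return out.reshape(h, w), valid.reshape(h, w)

def approx_difference(I0_k, IR_km1, M):
    """Approximate difference image of step 4: warp the previous
    recovered frame into the current frame, fill pixels the warp does
    not cover from the current original image, and subtract the
    resulting approximate median from I0_k."""
    warped, mask = warp_homography(IR_km1, M, I0_k.shape)
    approx_median = np.where(mask, warped, I0_k)
    return I0_k - approx_median
```

With the identity homography this reduces to a plain frame difference; pixels outside the warped support contribute zero, matching the missing-data handling discussed later for the W matrix.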
5. Apply the predicted correction factor to I_{0,k}. Using the correction factor Ĉ_k found in the last step, the current image is approximately recovered from the sunflicker effect. This recovered image is denoted by Î_{R,k}.
6. Perform image registration between Î_{R,k} and I_{R,k−1}. From this obtain the real M_{k,k−1}.
7. Update I_{M,k}. Using the motion-compensated filtering method (Gracias et al., 2008), create a median image for the current frame from the last few original frames. In this case, use Î_{R,k} to perform the registration when finding the current median image, I_{M,k}.
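Once the frames have been registered, the median update itself is a per-pixel temporal median; a one-line sketch (the helper name is ours, and the warping is assumed already done):

```python
import numpy as np

def motion_compensated_median(warped_frames):
    """Per-pixel temporal median of the last few original frames,
    each already warped (registered) into the current frame, as in
    the motion-compensated filtering of Gracias et al. (2008).
    Unlike the mean, the median ignores transient intensity
    clippings caused by strong caustics."""
    return np.median(np.stack(warped_frames), axis=0)
```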
8. Obtain the final I_{R,k}. Using I_{M,k}, find the real difference image for the current frame, I_{d,k}, the correct sunflicker pattern H_k, and finally the correctly recovered image I_{R,k}, with the sunlight properly removed.
9. Go to step 2 and repeat for the next frame.
The image registration is performed using the classic approach of robust model-based estimation using Harris corner detection (Harris and Stephens, 1988) and normalized cross-correlation (Zhao et al., 2006). This method proved more resilient to strong illumination differences than SIFT (Lowe, 2004). Furthermore, the operation is considerably faster since the search areas are small.
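As an illustration of the matching criterion, the sketch below scores candidate offsets inside a small search window with normalized cross-correlation; because NCC subtracts the patch means and divides by their norms, it is invariant to affine intensity changes, which is what makes it tolerant of strong illumination differences. The helper names and window logic are ours, not taken from the cited implementations.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equal-sized patches,
    in [-1, 1]; invariant to gain and offset changes in intensity."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_in_window(img, template, center, radius):
    """Slide the template over a (2*radius+1)^2 search window around
    `center` in img; return the best-scoring offset (dy, dx) and its
    NCC score. Small radii keep the search fast."""
    th, tw = template.shape
    cy, cx = center
    best, best_off = -2.0, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y0, x0 = cy + dy - th // 2, cx + dx - tw // 2
            if y0 < 0 or x0 < 0 or y0 + th > img.shape[0] or x0 + tw > img.shape[1]:
                continue
            s = ncc(img[y0:y0 + th, x0:x0 + tw], template)
            if s > best:
                best, best_off = s, (dy, dx)
    return best_off, best
```

In the full pipeline each Harris corner in one frame contributes one such template, and the resulting matches feed a robust homography estimation.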
We assume knowledge of the gamma values for each color channel. For unknown gamma values one can apply blind gamma estimation; an efficient method is described in (Farid, 2001; Farid and Popescu, 2001), which exploits the fact that gamma correction introduces specific higher-order correlations in the frequency domain. Given the gamma values, we transform the intensities to a linear scale. After deflickering, the final output images are transformed into the sRGB space with the prescribed gamma value.
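The per-channel linearization and re-encoding can be illustrated as below. Note this uses a pure power law as a simplified stand-in for the full sRGB transfer curve, which also has a short linear segment near black; the function names are ours.

```python
import numpy as np

def to_linear(img, gamma):
    """Undo per-channel gamma: map display-referred intensities in
    [0, 1] to a linear scale before deflickering."""
    return np.power(img, gamma)

def to_gamma(img, gamma):
    """Re-apply the prescribed gamma after deflickering (pure power
    law, a simplification of the full sRGB curve)."""
    return np.power(np.clip(img, 0.0, 1.0), 1.0 / gamma)
```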
The steps above are applied over each color chan-
nel independently. Strong caustics lead to overex-
posure and intensity clipping in one or more of the
color channels, resulting in chromaticity changes in
the original images. These clippings typically affect
different regions of the images over time, given the
non-stationary nature of the caustics. The median is
not affected by such transient clippings, whereas the average is. The low-pass filtering is performed using a fourth-order Butterworth filter (Kovesi, 2009), with a manually adjusted cutoff frequency.
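A minimal frequency-domain version of such a filter, in the style of Kovesi's lowpassfilter, could look like this; the cutoff is expressed as a fraction of the sampling frequency (0 < cutoff ≤ 0.5), and the function name is ours.

```python
import numpy as np

def butterworth_lowpass(img, cutoff, order=4):
    """Frequency-domain low-pass filtering with the Butterworth
    transfer function H(f) = 1 / (1 + (f / cutoff)^(2*order)),
    where f is the normalized radial frequency."""
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    radius = np.sqrt(u[None, :] ** 2 + v[:, None] ** 2)
    H = 1.0 / (1.0 + (radius / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))
```

A fourth-order filter gives a reasonably sharp rolloff without the ringing of an ideal cutoff, which matters when the filtered difference images are later subtracted from the frames.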
Due to the camera motion, the stack of warped difference images described in step 3 may not cover the entire area of the current frame. Considering the whole area of the current frame would create missing data in the W matrix for PCA. To circumvent this, only the largest area of the current frame that is present in every warped frame is considered.
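This restriction amounts to intersecting the validity masks of the warped frames; a brief sketch (helper name is ours):

```python
import numpy as np

def common_support(masks):
    """Given the valid-pixel masks of all N warped difference images
    (True where the warp produced data), return the region of the
    current frame covered by every one of them, plus its bounding
    box (y0, y1, x0, x1). Only pixels inside this region enter the
    W matrix, so the PCA never sees missing data."""
    common = np.logical_and.reduce(np.stack(masks))
    ys, xs = np.nonzero(common)
    bbox = (ys.min(), ys.max() + 1, xs.min(), xs.max() + 1)
    return common, bbox
```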
5 SELECTED RESULTS
The performance of the proposed system was evaluated against the previously available offline method (Gracias et al., 2008) by running both on several test datasets of shallow-water video sequences with distinct refracted sunlight conditions. The main evaluation criterion is the number of inliers found per time-consecutive image pair in each registration step. This criterion was found to be a good
VISAPP 2012 - International Conference on Computer Vision Theory and Applications