Since the ground truth undistorted images of these
scenes are not available, we cannot evaluate the accuracy
of the generated images numerically. However, we can
still observe that the visibility of the scene in (b) is
better than that in (a) and (c). For example, the white
lane markers and the road guardrails are heavily distorted
in (a) and (c), but appear more accurate in (b).
Although our method outperforms the existing
state-of-the-art method, it is not perfect and leaves
room for improvement. In particular, the heavy rain
model used to generate the training dataset needs to
be improved; a more accurate heavy rain model would
lead to more accurate raindrop removal.
6 CONCLUSIONS
In this paper, we proposed a new method for remov-
ing image distortion caused by raindrops under heavy
rain.
In heavy rain, raindrops form a non-uniform water film
on the windshield, and the visibility for a driver degrades
drastically. The existing raindrop removal methods
cannot recover clear images in such situations, since
they assume that the background scene is visible through
the gaps between the raindrops, which no longer holds
in heavy rain. Thus, in this paper, we proposed a new
method for recovering raindrop-free images from a
series of distorted images. The results of our
experiments show that the proposed method outperforms
the state-of-the-art raindrop removal method in heavy
rain situations.
The proposed method is promising, but challenges
remain. Our method considers only image degradation
due to raindrops. However, in actual heavy rain,
degradation due to rain streaks also occurs, so it is
desirable to extend the method to handle both types
of degradation.
Furthermore, it is also important to train the network
on real heavy rain images to improve accuracy. Since
ground truth undistorted images are not available under
heavy rain, we need to consider unsupervised learning
in the raindrop removal framework.
REFERENCES
Chen, Y. and Hsu, C. (2013). A generalized low-rank ap-
pearance model for spatio-temporally correlated rain
streaks. In Proc. of International Conference on Com-
puter Vision (ICCV), pages 1968–1975.
Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler,
M., Benenson, R., Franke, U., Roth, S., and Schiele,
B. (2016). The cityscapes dataset for semantic urban
scene understanding. In Proc. of the IEEE Conference
on Computer Vision and Pattern Recognition (CVPR).
Fu, X., Huang, J., Ding, X., Liao, Y., and Paisley, J. (2017).
Clearing the skies: A deep network architecture for
single-image rain removal. IEEE Transactions on Im-
age Processing, 26(6):2944–2956.
Garg, K. and Nayar, S. (2004). Detection and removal of
rain from videos. In Proc. Conference on Computer
Vision and Pattern Recognition (CVPR), volume 1.
Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A. (2017).
Image-to-image translation with conditional adversar-
ial networks. In Proc. of the IEEE Conference on
Computer Vision and Pattern Recognition (CVPR),
pages 1125–1134.
Li, Y., Tan, R., Guo, X., Lu, J., and Brown, M. (2017).
Single image rain streak separation using layer priors.
IEEE Transactions on Image Processing.
Luo, Y., Xu, Y., and Ji, H. (2015). Removing rain from
a single image via discriminative sparse coding. In
Proc. of International Conference on Computer Vision
(ICCV), pages 3397–3405.
Matsui, T., Sakaue, F., and Sato, J. (2014). Raindrop re-
moval by using camera array system. In IEEE 17th In-
ternational Conference on Intelligent Transportation
Systems (ITSC), pages 2249–2250. IEEE.
Brewer, N. and Liu, N. (2008). Using the shape characteris-
tics of rain to identify and remove rain from video. In
Proc. of Joint International Workshops on Statistical
Techniques in Pattern Recognition and Structural and
Syntactic Pattern Recognition, volume LNCS 5342,
pages 451–458.
Nomoto, K., Sakaue, F., and Sato, J. (2011). Raindrop com-
plement based on epipolar geometry and spatiotem-
poral patches. In Proc. of International Conference
on Computer Vision Theory and Applications, pages
175–180.
Qian, R., Tan, R. T., Yang, W., Su, J., and Liu, J. (2018).
Attentive generative adversarial network for raindrop
removal from a single image. In Proc. of the IEEE Con-
ference on Computer Vision and Pattern Recognition
(CVPR).
Ronneberger, O., Fischer, P., and Brox, T. (2015). U-net:
Convolutional networks for biomedical image seg-
mentation. In International Conference on Medical
Image Computing and Computer-Assisted Interven-
tion, pages 234–241. Springer.
Santhaseelan, V. and Asari, V. (2014). Utilizing local phase
information to remove rain from video. International
Journal of Computer Vision, pages 1–19.
Yamashita, A., Fukuchi, I., and Kaneko, T. (2009). Noises
removal from image sequences acquired with moving
camera by estimating camera motion from spatiotem-
poral information. In Proc. of International Confer-
ence on Intelligent Robots and Systems, pages 3794–
3801.