Authors:
Chris H. Bahnsen 1; David Vázquez 2; Antonio M. López 3 and Thomas B. Moeslund 1
Affiliations:
1 Visual Analysis of People Laboratory, Aalborg University, Denmark
2 Element AI, Spain
3 Computer Vision Center, Universitat Autònoma de Barcelona, Spain
Keyword(s):
Rain Removal, Traffic Surveillance, Image Denoising.
Related Ontology Subjects/Areas/Topics:
Computer Vision, Visualization and Computer Graphics; Image Enhancement and Restoration; Image Formation and Preprocessing; Motion, Tracking and Stereo Vision; Video Surveillance and Event Detection
Abstract:
Rainfall is a problem in automated traffic surveillance. Rain streaks occlude road users and degrade overall visibility, which in turn decreases object detection performance. One way of alleviating this is to artificially remove the rain from the images. This requires knowledge of corresponding rainy and rain-free images. Such pairs are often produced by overlaying synthetic rain on top of rain-free images. However, this method fails to account for the fact that rain falls throughout the entire three-dimensional volume of the scene. To overcome this, we introduce training data from the SYNTHIA virtual world that models rain streaks in the entirety of a scene. We train a conditional Generative Adversarial Network for rain removal and apply it to traffic surveillance images from the SYNTHIA and AAU RainSnow datasets. To measure the applicability of the rain-removed images in a traffic surveillance context, we run the YOLOv2 object detection algorithm on the original and rain-removed frames. The results on SYNTHIA show an 8% increase in detection accuracy compared to the original rain images. Interestingly, we find that high PSNR or SSIM scores do not imply good object detection performance.
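The abstract's closing observation contrasts image-quality metrics (PSNR, SSIM) with downstream detection accuracy. As a minimal sketch of how such metrics can be computed between a rain-removed frame and its rain-free ground truth, the snippet below uses scikit-image; the file names are hypothetical placeholders and are not from the paper, which evaluates on the SYNTHIA and AAU RainSnow datasets.

```python
# Sketch: compute PSNR and SSIM for a derained frame against its rain-free
# ground truth. File names are hypothetical; images are assumed to be RGB.
import numpy as np
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

derained = io.imread("derained_frame.png").astype(np.float64) / 255.0
ground_truth = io.imread("rain_free_frame.png").astype(np.float64) / 255.0

psnr = peak_signal_noise_ratio(ground_truth, derained, data_range=1.0)
ssim = structural_similarity(ground_truth, derained, data_range=1.0,
                             channel_axis=-1)

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")

# Note: as the abstract reports, high PSNR/SSIM does not guarantee better
# object detection, so detection accuracy (e.g. with YOLOv2) must be
# measured separately on the rain-removed frames.
```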