Authors:
Yongxin Wang (1) and Duminda Wijesekera (2)
Affiliations:
(1) Department of Computer Science, George Mason University, 4400 University Dr, Fairfax, VA 22030, U.S.A.
(2) Department of Computer Science and Cyber Security Engineering, George Mason University, 4400 University Dr, Fairfax, VA 22030, U.S.A.
Keyword(s):
Color-thermal Image Pairs, Pixel Invisibility, Cross-modality Distillation.
Abstract:
Deep neural networks have been very successful in image recognition. For those results to be useful for driving automation, however, they require quantifiable safety guarantees during night, dusk, dawn, glare, fog, rain and snow. To address this problem, we developed an algorithm that predicts a pixel-level invisibility map for color images without requiring manual labeling: it computes the probability that a pixel/region contains objects that are invisible in the color domain under light-challenged conditions such as day, night and fog. We do so through a novel use of cross-modality knowledge distillation from the color to the thermal domain, using weakly-aligned image pairs obtained during daylight, and we construct indicators of pixel-level invisibility by mapping both the color and thermal images into a shared space. Quantitative experiments show good performance of our pixel-level invisibility masks as well as the effectiveness of the distilled mid-level features for object detection in thermal imagery.
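As a rough illustration of the idea described in the abstract (not the authors' released code), the sketch below computes a per-pixel invisibility score as the feature dissimilarity between a color branch and a thermal branch in a shared space. The encoder definitions, feature widths, and the cosine-based dissimilarity measure are illustrative assumptions.

```python
# Minimal sketch, assuming two encoders distilled into a shared mid-level
# feature space; high dissimilarity marks pixels likely invisible in color.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder(in_ch: int, feat_ch: int = 64) -> nn.Module:
    """Placeholder mid-level feature extractor (stride 4); hypothetical architecture."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
    )

color_encoder = make_encoder(in_ch=3)    # color branch
thermal_encoder = make_encoder(in_ch=1)  # thermal branch (features distilled from color on day pairs)

def invisibility_map(rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
    """Return a per-pixel invisibility score, upsampled to the input resolution."""
    f_rgb = color_encoder(rgb)        # (B, C, h, w)
    f_thm = thermal_encoder(thermal)  # (B, C, h, w)
    sim = F.cosine_similarity(f_rgb, f_thm, dim=1, eps=1e-8)  # (B, h, w) in [-1, 1]
    inv = (1.0 - sim) / 2.0                                    # dissimilarity in [0, 1]
    inv = F.interpolate(inv.unsqueeze(1), size=rgb.shape[-2:],
                        mode="bilinear", align_corners=False)
    return inv.squeeze(1)                                      # (B, H, W)

# Example usage with dummy weakly-aligned color/thermal tensors.
rgb = torch.rand(1, 3, 256, 320)
thermal = torch.rand(1, 1, 256, 320)
print(invisibility_map(rgb, thermal).shape)  # torch.Size([1, 256, 320])
```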