Authors:
Tokihisa Hayakawa; Keiichi Nakanishi; Ryoya Katafuchi and Terumasa Tokunaga
Affiliation:
Kyushu Institute of Technology, 680-4 Kawazu Iizuka-Shi, Fukuoka, Japan
Keyword(s):
Anomaly Detection, Visual Inspection AI, Deep Learning, Visual Attention Mechanism, Self-Attention, MVTec AD, Plant Science.
Abstract:
Recently, the visual attention mechanism has become a promising way to improve the performance of Convolutional Neural Networks (CNNs) in many applications. In this paper, we propose a Layer-wise External Attention mechanism for efficient image anomaly detection. The core idea is the integration of unsupervised and supervised anomaly detectors via the visual attention mechanism. Our strategy is as follows: (i) prior knowledge about anomalies is represented as an anomaly map generated by a pre-trained network; (ii) the anomaly map is translated into an attention map by an external network; (iii) the attention map is then incorporated into intermediate layers of the anomaly detection network via visual attention. Notably, the proposed method can be applied to any CNN model in an end-to-end training manner. We also propose an example of a network with Layer-wise External Attention, called the Layer-wise External Attention Network (LEA-Net). Through extensive experiments using real-world datasets, we demonstrate that Layer-wise External Attention consistently boosts the anomaly detection performance of an existing CNN model, even on small and unbalanced data. Moreover, we show that Layer-wise External Attention works well with Self-Attention Networks.
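The three-step strategy in the abstract can be sketched numerically. The following minimal NumPy sketch is illustrative only: the anomaly map is random stand-in data, the "external network" is reduced to a scale-plus-bias followed by a sigmoid, and the residual multiplicative form F' = F * (1 + M) is an assumption about how the attention map is incorporated, not the exact LEA-Net architecture.

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def attention_from_anomaly(anomaly_map, w=1.0, b=0.0):
    """Step (ii): translate an anomaly map (H, W) into an attention
    map with values in (0, 1). Stands in for the external network;
    here simplified to a 1x1 'conv' (scale + bias) and a sigmoid."""
    return sigmoid(w * anomaly_map + b)


def apply_layerwise_attention(features, attention):
    """Step (iii): incorporate the attention map into an intermediate
    feature map via residual multiplicative attention, broadcasting
    the single-channel map over all C channels."""
    return features * (1.0 + attention[None, :, :])


C, H, W = 8, 4, 4
features = rng.normal(size=(C, H, W))   # intermediate CNN feature map
anomaly = rng.uniform(size=(H, W))      # step (i): stand-in for the
                                        # pre-trained network's output
attn = attention_from_anomaly(anomaly)
refined = apply_layerwise_attention(features, attn)
print(refined.shape)  # (8, 4, 4): same shape, so the refined map can
                      # feed the next layer unchanged
```

Because the attention map lies in (0, 1) and enters as (1 + M), attended regions are amplified by at most a factor of two while unattended regions pass through nearly unchanged, which keeps the mechanism pluggable into any intermediate layer without altering tensor shapes.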