Layer-wise External Attention for Efficient Deep Anomaly Detection

Authors: Tokihisa Hayakawa; Keiichi Nakanishi; Ryoya Katafuchi and Terumasa Tokunaga

Affiliation: Kyushu Institute of Technology, 680-4 Kawazu Iizuka-Shi, Fukuoka, Japan

Keyword(s): Anomaly Detection, Visual Inspection AI, Deep Learning, Visual Attention Mechanism, Self-Attention, MVTec AD, Plant Science.

Abstract: Recently, the visual attention mechanism has become a promising way to improve the performance of Convolutional Neural Networks (CNNs) for many applications. In this paper, we propose a Layer-wise External Attention mechanism for efficient image anomaly detection. The core idea is the integration of unsupervised and supervised anomaly detectors via the visual attention mechanism. Our strategy is as follows: (i) prior knowledge about anomalies is represented as an anomaly map generated by a pre-trained network; (ii) the anomaly map is translated into an attention map by an external network; (iii) the attention map is then incorporated into intermediate layers of the anomaly detection network via visual attention. Notably, the proposed method can be applied to any CNN model in an end-to-end training manner. We also propose an example of a network with Layer-wise External Attention, called the Layer-wise External Attention Network (LEA-Net). Through extensive experiments on real-world datasets, we demonstrate that Layer-wise External Attention consistently boosts the anomaly detection performance of an existing CNN model, even on small and unbalanced data. Moreover, we show that Layer-wise External Attention works well with Self-Attention Networks.
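As a rough illustration of the layer-wise external attention flow described in the abstract, the sketch below shows one way such an attention injection could look in PyTorch. It is not the authors' LEA-Net: the module names (AttentionTranslator, ToyLEANet), the layer sizes, and the residual gating form x * (1 + attention) are illustrative assumptions; only the overall flow (an anomaly map from a pre-trained detector is translated into an attention map by an external network and then applied to an intermediate feature map of the detection CNN) follows the abstract.

import torch
import torch.nn as nn


class AttentionTranslator(nn.Module):
    """Hypothetical external network: turns a 1-channel anomaly map into a
    1-channel attention map resized to the target intermediate layer."""

    def __init__(self, out_size):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(8, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # attention weights in (0, 1)
        )
        self.out_size = out_size

    def forward(self, anomaly_map):
        att = self.net(anomaly_map)
        # Match the spatial resolution of the intermediate feature map.
        return nn.functional.interpolate(
            att, size=self.out_size, mode="bilinear", align_corners=False
        )


class ToyLEANet(nn.Module):
    """Minimal anomaly classifier whose intermediate features are modulated
    by the externally generated attention map (assumed residual gating)."""

    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2)
        )
        self.block2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2)
        )
        # Feature size after block1 for a 64x64 input image.
        self.translator = AttentionTranslator(out_size=(32, 32))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2)
        )

    def forward(self, image, anomaly_map):
        x = self.block1(image)              # intermediate features
        att = self.translator(anomaly_map)  # external attention map
        x = x * (1.0 + att)                 # inject attention layer-wise
        x = self.block2(x)
        return self.head(x)


if __name__ == "__main__":
    model = ToyLEANet()
    images = torch.randn(4, 3, 64, 64)       # input images
    anomaly_maps = torch.rand(4, 1, 64, 64)   # stand-in for maps from a pre-trained detector
    logits = model(images, anomaly_maps)
    print(logits.shape)  # torch.Size([4, 2])

In this toy setup the anomaly map would come from any pre-trained unsupervised detector (random noise is used here only to check shapes), and the whole model remains trainable end to end because the attention injection is differentiable.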

License: CC BY-NC-ND 4.0


Paper citation in several formats:
Hayakawa, T.; Nakanishi, K.; Katafuchi, R. and Tokunaga, T. (2023). Layer-wise External Attention for Efficient Deep Anomaly Detection. In Proceedings of the 3rd International Conference on Image Processing and Vision Engineering - IMPROVE; ISBN 978-989-758-642-2; ISSN 2795-4943, SciTePress, pages 100-110. DOI: 10.5220/0011856800003497

@conference{improve23,
author={Tokihisa Hayakawa and Keiichi Nakanishi and Ryoya Katafuchi and Terumasa Tokunaga},
title={Layer-wise External Attention for Efficient Deep Anomaly Detection},
booktitle={Proceedings of the 3rd International Conference on Image Processing and Vision Engineering - IMPROVE},
year={2023},
pages={100-110},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011856800003497},
isbn={978-989-758-642-2},
issn={2795-4943},
}

TY - CONF
JO - Proceedings of the 3rd International Conference on Image Processing and Vision Engineering - IMPROVE
TI - Layer-wise External Attention for Efficient Deep Anomaly Detection
SN - 978-989-758-642-2
IS - 2795-4943
AU - Hayakawa, T.
AU - Nakanishi, K.
AU - Katafuchi, R.
AU - Tokunaga, T.
PY - 2023
SP - 100
EP - 110
DO - 10.5220/0011856800003497
PB - SciTePress
ER -