Authors:
Peter Lorenz 1; Margret Keuper 2,3 and Janis Keuper 1,4
Affiliations:
1 ITWM Fraunhofer, Kaiserslautern, Germany; 2 University of Siegen, Germany; 3 Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany; 4 IMLA, Offenburg University, Germany
Keyword(s):
Adversarial Examples, Detection.
Abstract:
Convolutional neural networks (CNNs) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches largely remain vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defence mechanisms. In the latter case, a small “detector” is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and light-weight detector, which leverages recent findings on the relation between networks’ local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state of the art in adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Source code is available at: https://github.com/adverML/multiLID
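The idea outlined in the abstract, estimating local intrinsic dimensionality (LID) from feature activations and training a small binary detector on it, can be illustrated as follows. This is a minimal sketch that uses the classic maximum-likelihood LID estimator and a logistic-regression detector; it is not the authors' multiLID implementation from the linked repository, and the function names, the choice of k, and the toy data are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression


def lid_mle(queries, reference, k=20):
    """Maximum-likelihood LID estimate for each row of `queries`, using
    `reference` (clean activations) as the neighbourhood pool. Assumes the
    query points are not contained in `reference`, so distances are non-zero."""
    # pairwise Euclidean distances, shape (n_queries, n_reference)
    d = np.linalg.norm(queries[:, None, :] - reference[None, :, :], axis=-1)
    # distances to the k nearest neighbours, sorted ascending
    knn = np.sort(d, axis=1)[:, :k]
    # MLE estimator: LID(x) = -( (1/k) * sum_i log(r_i / r_k) )^(-1)
    return -1.0 / np.mean(np.log(knn / knn[:, -1:]), axis=1)


def fit_detector(clean_feats, adv_feats, reference_feats, k=20):
    """Train a binary detector on per-sample LID values of clean vs.
    adversarial activations (LID values from several layers could be
    stacked into a feature vector in the same way)."""
    X = np.concatenate([lid_mle(clean_feats, reference_feats, k),
                        lid_mle(adv_feats, reference_feats, k)])[:, None]
    y = np.concatenate([np.zeros(len(clean_feats)), np.ones(len(adv_feats))])
    return LogisticRegression().fit(X, y)


# Toy usage with random arrays standing in for network activations.
rng = np.random.default_rng(0)
reference = rng.normal(size=(200, 64))   # pool of clean activations
clean = rng.normal(size=(100, 64))
adv = rng.normal(size=(100, 64)) * 1.5   # stand-in for perturbed activations
detector = fit_detector(clean, adv, reference)

In practice the activations would come from one or several network layers rather than random arrays, and the per-layer LID estimates would be concatenated before training the detector.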