Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F. A., and Brendel, W. (2019). ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In International Conference on Learning Representations.
Golan, I. and El-Yaniv, R. (2018). Deep anomaly detec-
tion using geometric transformations. In Advances in
Neural Information Processing Systems, pages 9758–
9769.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.
Hendrycks, D., Mazeika, M., and Dietterich, T. (2019).
Deep anomaly detection with outlier exposure. In In-
ternational Conference on Learning Representations.
Hendrycks, D., Mu, N., Cubuk, E. D., Zoph, B., Gilmer, J.,
and Lakshminarayanan, B. (2020). AugMix: A sim-
ple data processing method to improve robustness and
uncertainty. Proceedings of the International Confer-
ence on Learning Representations (ICLR).
Hermann, K. L., Chen, T., and Kornblith, S. (2020). The
origins and prevalence of texture bias in convolutional
neural networks. In Larochelle, H., Ranzato, M., Had-
sell, R., Balcan, M., and Lin, H., editors, Advances
in Neural Information Processing Systems 33: An-
nual Conference on Neural Information Processing
Systems 2020, NeurIPS 2020, December 6-12, 2020,
virtual.
Ioffe, S. and Szegedy, C. (2015). Batch normalization: Ac-
celerating deep network training by reducing inter-
nal covariate shift. In Proceedings of the 32nd In-
ternational Conference on International Conference
on Machine Learning - Volume 37, ICML’15, page
448–456, Lille, France. JMLR.org.
Kamoi, R. and Kobayashi, K. (2020). Why is the Mahalanobis distance effective for anomaly detection? arXiv preprint arXiv:2003.00402.
Kingma, D. P. and Ba, J. (2015). Adam: A method for
stochastic optimization. In Bengio, Y. and LeCun,
Y., editors, 3rd International Conference on Learn-
ing Representations, ICLR 2015, San Diego, CA, USA,
May 7-9, 2015, Conference Track Proceedings.
Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J.,
Desjardins, G., Rusu, A. A., Milan, K., Quan, J.,
Ramalho, T., Grabska-Barwinska, A., et al. (2017).
Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526.
Kornblith, S., Shlens, J., and Le, Q. V. (2019). Do better ImageNet models transfer better? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2661–2671.
Ledoit, O. and Wolf, M. (2004). A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88(2):365–411.
Lee, K., Lee, K., Lee, H., and Shin, J. (2018). A simple uni-
fied framework for detecting out-of-distribution sam-
ples and adversarial attacks. In Advances in Neural
Information Processing Systems, pages 7167–7177.
Li, C.-L., Sohn, K., Yoon, J., and Pfister, T. (2021). CutPaste: Self-supervised learning for anomaly detection and localization. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9659–9669.
Li, H., Singh, B., Najibi, M., Wu, Z., and Davis, L. S.
(2019). An analysis of pre-training on object detec-
tion. arXiv preprint arXiv:1904.05871.
Liu, W., Li, R., Zheng, M., Karanam, S., Wu, Z., Bhanu, B.,
Radke, R. J., and Camps, O. (2020). Towards visually
explaining variational autoencoders. In Proceedings
of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR).
Liznerski, P., Ruff, L., Vandermeulen, R. A., Franks, B. J., Kloft, M., and Müller, K. R. (2021). Explainable deep one-class classification. In International Conference on Learning Representations.
Mahalanobis, P. C. (1936). On the generalized distance in statistics. Proceedings of the National Institute of Sciences of India, 2(1):49–55.
Naseer, M. M., Ranasinghe, K., Khan, S. H., Hayat, M.,
Shahbaz Khan, F., and Yang, M.-H. (2021). Intriguing
properties of vision transformers. Advances in Neural
Information Processing Systems, 34.
Perera, P. and Patel, V. M. (2019). Learning deep features
for one-class classification. IEEE Transactions on Im-
age Processing, 28(11):5450–5463.
Reiss, T., Cohen, N., Bergman, L., and Hoshen, Y. (2021). PANDA: Adapting pretrained features for anomaly detection and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2806–2814.
Rezende, D. and Mohamed, S. (2015). Variational infer-
ence with normalizing flows. In Bach, F. and Blei,
D., editors, Proceedings of the 32nd International
Conference on Machine Learning, volume 37 of Pro-
ceedings of Machine Learning Research, pages 1530–
1538, Lille, France. PMLR.
Rippel, O., Haumering, P., Brauers, J., and Merhof, D. (2021a). Anomaly detection for the automated visual inspection of PET preform closures. In 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), volume 1.
Rippel, O. and Merhof, D. (2021). Leveraging pre-trained
segmentation networks for anomaly segmentation. In
2021 26th IEEE International Conference on Emerg-
ing Technologies and Factory Automation (ETFA),
pages 01–04.
Rippel, O., Mertens, P., König, E., and Merhof, D. (2021b). Gaussian anomaly detection by modeling the distribution of normal data in pretrained deep features. IEEE Transactions on Instrumentation and Measurement, 70:1–13.
Rippel, O., Mertens, P., and Merhof, D. (2021c). Model-
ing the distribution of normal data in pre-trained deep
features for anomaly detection. In 2020 25th Inter-
national Conference on Pattern Recognition (ICPR),
pages 6726–6733.
Rippel, O., Müller, M., Münkel, A., Gries, T., and Merhof, D. (2021d). Estimating the probability density function of new fabrics for fabric anomaly detection. In