
5 CONCLUSION
The DROOD method is based on a statistical framework for OOD detection: it successively synthesizes statistics computed from all the features produced by a DNN. The experimental study shows very good detection performance compared to state-of-the-art methods on two CNN-based image classification networks and one transformer-based network, which also demonstrates its ability to perform well regardless of the model.
We observed variations in performance depending on the chosen DNN and the OOD method, which is to be expected. However, some existing OOD detection methods appear to be tied to specific neural network architectures, since their performance varies considerably when they are applied to others. Experiments suggest that our DROOD detection approach is more robust in this respect.
As further work, it would of course be interesting to test distances other than the Euclidean distance. As mentioned above, in the transformer architecture, the “class token” gathers information from the “image tokens” across the transformer encoding layers for the final classification task. One can therefore expect that the max operation in the MaSF and DROOD methods could be effectively replaced by the use of this “class token”. Finally, it would also be interesting to experiment with this type of approach in other application fields, such as audio analysis or image segmentation.
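To make the distance-swap suggestion concrete, the sketch below (hypothetical names and a toy centroid-based score; not the paper's actual DROOD implementation) shows a layer-wise OOD scoring skeleton with a pluggable distance function, aggregated with a max as in MaSF/DROOD-style pipelines. Replacing the Euclidean distance then amounts to passing a different callable, here illustrated with a cosine distance:

```python
import numpy as np

def euclidean(a, b):
    """Euclidean distance between a feature vector and a reference."""
    return float(np.linalg.norm(a - b))

def cosine(a, b):
    """1 - cosine similarity: one alternative distance to experiment with."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def ood_score(per_layer_feats, centroids, distance=euclidean):
    """Score a sample by its distance to an in-distribution reference at each
    layer, then aggregate across layers with a max (MaSF/DROOD-style)."""
    return max(distance(f, c) for f, c in zip(per_layer_feats, centroids))
```

In this toy form, only the `distance` argument changes between experiments, so comparing Euclidean against other distances requires no change to the aggregation logic itself.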
REFERENCES
Carvalho, T. M., Vellasco, M. M. B. R., and do Amaral, J.
F. M. Out-of-distribution detection in deep learning
models: A feature space-based approach. In Interna-
tional Joint Conference on Neural Networks, IJCNN,
Gold Coast, Australia, June 18-23, 2023, pages 1–7.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn,
D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer,
M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby,
N. An image is worth 16x16 words: Transformers
for image recognition at scale. In International Con-
ference on Learning Representations, ICLR, May 3-7,
2021.
Dziedzic, A., Rabanser, S., Yaghini, M., Ale, A., Erdogdu,
M. A., and Papernot, N. p-dknn: Out-of-distribution
detection through statistical testing of deep represen-
tations. ArXiv, 2022.
Haroush, M., Frostig, T., Heller, R., and Soudry, D. A statis-
tical framework for efficient out of distribution detec-
tion in deep neural networks. In International Confer-
ence on Learning Representations, ICLR, April 25-29,
2022.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual
learning for image recognition. In IEEE Conference
on Computer Vision and Pattern Recognition, CVPR,
June 27-30, 2016, Las Vegas, NV, USA, pages 770–
778.
Hendrycks, D. and Gimpel, K. A baseline for detecting mis-
classified and out-of-distribution examples in neural
networks. In International Conference on Learning
Representations, ICLR, April 24-26, 2017, Toulon,
France.
Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger,
K. Q. Densely connected convolutional networks.
In IEEE Conference on Computer Vision and Pattern
Recognition, CVPR, 2017, Honolulu, HI, USA, pages
2261–2269.
Kaur, R., Jha, S., Roy, A., Park, S., Dobriban, E., Sokol-
sky, O., and Lee, I. idecode: In-distribution equivari-
ance for conformal out-of-distribution detection. In
AAAI Conference on Artificial Intelligence, 2022,
volume 36, pages 7104–7114.
Krizhevsky, A. Learning multiple layers of features from
tiny images. Technical report, University of Toronto,
2009, Toronto, Ontario.
Le, Y. and Yang, X. Tiny imagenet visual recognition chal-
lenge. CS 231N, 2015, 7(7):3.
Lee, K., Lee, K., Lee, H., and Shin, J. A simple unified
framework for detecting out-of-distribution samples
and adversarial attacks. In Bengio, S., Wallach, H. M.,
Larochelle, H., Grauman, K., Cesa-Bianchi, N., and
Garnett, R., editors, Advances in Neural Information
Processing Systems 31: Annual Conference on Neural
Information Processing Systems, NeurIPS, December
3-8, 2018, Montréal, Canada.
Li, J., Li, S., Wang, S., Zeng, Y., Tan, F., and Xie,
C. Enhancing out-of-distribution detection with
multitesting-based layer-wise feature fusion. In IEEE
Conference on Artificial Intelligence, CAI, 25-27
June, 2024, Singapore, pages 510–517.
Liang, S., Li, Y., and Srikant, R. Enhancing the reliability
of out-of-distribution image detection in neural net-
works. In International Conference on Learning Rep-
resentations, ICLR, April 30 - May 3, 2018, Vancou-
ver, BC, Canada.
Liu, W., Wang, X., Owens, J., and Li, Y. Energy-based
out-of-distribution detection. In Larochelle, H., Ran-
zato, M., Hadsell, R., Balcan, M., and Lin, H., editors,
Advances in Neural Information Processing Systems,
2020, volume 33, pages 21464–21475.
Malinin, A. and Gales, M. Predictive uncertainty estimation
via prior networks. Advances in Neural Information
Processing Systems 31: Annual Conference on Neural
Information Processing Systems, NeurIPS, December
3-8, 2018, Montréal, Canada.
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., Ng,
A. Y., et al. Reading digits in natural images with
unsupervised feature learning. In NIPS workshop
on deep learning and unsupervised feature learning,
2011, Granada, page 4.
Raghuram, J., Chandrasekaran, V., Jha, S., and Banerjee,
S. A general framework for detecting anomalous in-
puts to dnn classifiers. In International Conference on
Machine Learning, ICML, 2021, pages 8764–8775.
DNN Layers Features Reduction for Out-of-Distribution Detection