
REFERENCES
Ackermann, J., Sakaridis, C., and Yu, F. (2023). Maskomaly: Zero-shot mask anomaly segmentation. In The British Machine Vision Conference (BMVC).
Chan, R., Lis, K., Uhlemeyer, S., Blum, H., Honari, S., Siegwart, R., Fua, P., Salzmann, M., and Rottmann, M. (2021). SegmentMeIfYouCan: A benchmark for anomaly segmentation. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
Delić, A., Grčić, M., and Šegvić, S. (2024). Outlier detection by ensembling uncertainty with negative objectness.
Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1–22.
Di Biase, G., Blum, H., Siegwart, R., and Cadena, C.
(2021). Pixel-wise anomaly detection in complex
driving scenes. In 2021 IEEE/CVF Conference on
Computer Vision and Pattern Recognition (CVPR),
pages 16913–16922.
Durkan, C., Bekasov, A., Murray, I., and Papamakarios, G. (2019). Neural spline flows. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
Fahrmeir, L., Hamerle, A., and Tutz, G., editors (1996).
Multivariate statistische Verfahren. De Gruyter,
Berlin, Boston.
Gal, Y. and Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In ICML.
Galesso, S., Argus, M., and Brox, T. (2023). Far away
in the deep space: Dense nearest-neighbor-based out-
of-distribution detection. In 2023 IEEE/CVF In-
ternational Conference on Computer Vision Work-
shops (ICCVW), pages 4479–4489, Los Alamitos,
CA, USA. IEEE Computer Society.
Grčić, M., Šarić, J., and Šegvić, S. (2023). On advantages of mask-level recognition for outlier-aware segmentation. In CVPR Workshops.
Grčić, M., Bevandić, P., Kalafatić, Z., and Šegvić, S. (2023). Dense out-of-distribution detection by robust learning on synthetic negative data.
Guo, C., Pleiss, G., Sun, Y., and Weinberger, K. Q. (2017).
On calibration of modern neural networks. In ICML.
Hendrycks, D. and Gimpel, K. (2017). A baseline for de-
tecting misclassified and out-of-distribution examples
in neural networks. In ICLR.
Hendrycks, D., Mazeika, M., and Dietterich, T. (2019). Deep anomaly detection with outlier exposure. In ICLR.
Hoogeboom, E., Van Den Berg, R., and Welling, M.
(2019). Emerging convolutions for generative nor-
malizing flows. In Chaudhuri, K. and Salakhutdi-
nov, R., editors, Proceedings of the 36th International
Conference on Machine Learning, volume 97 of Pro-
ceedings of Machine Learning Research, pages 2771–
2780. PMLR.
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham,
H., Le, Q., Sung, Y.-H., Li, Z., and Duerig, T. (2021).
Scaling up visual and vision-language representation
learning with noisy text supervision. In Meila, M. and
Zhang, T., editors, Proceedings of the 38th Interna-
tional Conference on Machine Learning, volume 139
of Proceedings of Machine Learning Research, pages
4904–4916. PMLR.
Jiang, H., Kim, B., Guan, M., and Gupta, M. (2018). To
trust or not to trust a classifier. In NeurIPS.
Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C.,
Gustafson, L., Xiao, T., Whitehead, S., Berg, A. C.,
Lo, W.-Y., Dollár, P., and Girshick, R. (2023). Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 4015–4026.
Kobyzev, I., Prince, S. J., and Brubaker, M. A. (2021). Nor-
malizing flows: An introduction and review of current
methods. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 43(11):3964–3979.
Lakshminarayanan, B., Pritzel, A., and Blundell, C. (2017).
Simple and scalable predictive uncertainty estimation
using deep ensembles. In NeurIPS.
Lis, K., Honari, S., Fua, P., and Salzmann, M. (2023). De-
tecting road obstacles by erasing them. IEEE Trans-
actions on Pattern Analysis and Machine Intelligence,
pages 1–11.
Maag, K., Chan, R., Uhlemeyer, S., Kowol, K., and
Gottschalk, H. (2022). Two video data sets for track-
ing and retrieval of out of distribution objects. In Pro-
ceedings of the Asian Conference on Computer Vision,
pages 3776–3794.
Minderer, M., Djolonga, J., Romijnders, R., Hubis, F. A.,
Zhai, X., Houlsby, N., Tran, D., and Lucic, M. (2021).
Revisiting the calibration of modern neural networks.
In NeurIPS.
Mukhoti, J. and Gal, Y. (2018). Evaluating Bayesian deep learning methods for semantic segmentation. arXiv preprint arXiv:1811.12709.
Nalisnick, E., Matsukawa, A., Teh, Y. W., Gorur, D., and
Lakshminarayanan, B. (2019). Do deep generative
models know what they don’t know? International
Conference on Learning Representations.
Nayal, N., Shoeb, Y., and Güney, F. (2024). A likelihood ratio-based approach to segmenting unknown objects.
Nayal, N., Yavuz, M., Henriques, J. F., and Güney, F. (2023). RbA: Segmenting unknown regions rejected by all. In ICCV.
Nekrasov, A., Hermans, A., Kuhnert, L., and Leibe, B.
(2023). UGainS: Uncertainty Guided Anomaly In-
stance Segmentation. In GCPR.
Neyman, J. and Pearson, E. S. (1933). IX. On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society of London, Series A, 231:289–337.
VISAPP 2025 - 20th International Conference on Computer Vision Theory and Applications