
replay-based methods to analyze whether uncertainty provides complementary information that improves their sample selection.
ACKNOWLEDGEMENTS
This work has been partially supported by the Spanish project PID2022-136436NB-I00 (AEI-MICINN), Horizon EU project MUSAE (No. 01070421), 2021-SGR-01094 (AGAUR), ICREA Academia’2022 (Generalitat de Catalunya), Robo STEAM (2022-1-BG01-KA220-VET-000089434, Erasmus+ EU), DeepSense (ACE053/22/000029, ACCIÓ), DeepFoodVol (AEI-MICINN, PDC2022-133642-I00), PID2022-141566NB-I00 (AEI-MICINN), the Beatriu de Pinós Programme of the Ministry of Research and Universities of the Government of Catalonia (2022 BP 00257), and the Agencia Nacional de Investigación y Desarrollo de Chile (ANID) (Grant No. FONDECYT INICIACIÓN 11230262).