mentation on the BALD heuristic provides a robust and efficient criterion for sample selection. It not only selects more uncertain samples at each AL step, but also improves the stability of the heuristic function, subsequently leading to better label efficiency. With only 60% of the samples, we reach the same accuracy as supervised training on the full selected subset. The computing time saved by training the model on the AL-selected subset, rather than on the original dataset, can amount to several days to weeks. Data augmentation within AL frameworks thus reduces both annotation costs and training time in production over large datasets.
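To make the selection step concrete, below is a minimal sketch (PyTorch; not the paper's exact code) of BALD scoring combined with test-time augmentation: each stochastic MC-Dropout pass averages its softmax outputs over augmented views of the input before the mutual-information score is computed. The model, the augmentation list, and the tensor shapes are illustrative assumptions; the Baal library (Atighehchian et al., 2019) offers a production implementation of the BALD heuristic.

import torch

def bald_scores(probs: torch.Tensor) -> torch.Tensor:
    # probs: (n_pool, n_mc, n_classes) softmax outputs from stochastic passes.
    # BALD = H[E_w p(y|x,w)] - E_w H[p(y|x,w)], the mutual information
    # between the prediction and the model parameters.
    eps = 1e-12
    mean_p = probs.mean(dim=1)                                  # (n_pool, n_classes)
    entropy_of_mean = -(mean_p * (mean_p + eps).log()).sum(-1)  # H of averaged prediction
    mean_of_entropy = -(probs * (probs + eps).log()).sum(-1).mean(dim=1)
    return entropy_of_mean - mean_of_entropy

@torch.no_grad()
def augmented_mc_probs(model, x, augmentations, n_mc=20):
    # Keep dropout layers active so each pass is a stochastic sample (MC Dropout).
    model.train()
    passes = []
    for _ in range(n_mc):
        # Average the softmax over augmented views within one stochastic pass
        # (one design choice; views could also be treated as extra MC samples).
        views = torch.stack([model(aug(x)).softmax(-1) for aug in augmentations])
        passes.append(views.mean(dim=0))
    return torch.stack(passes, dim=1)  # (batch, n_mc, n_classes)

The unlabeled pool would then be ranked by these scores and the top-scoring samples sent for annotation, e.g. via scores.topk(budget).indices.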
ACKNOWLEDGEMENTS
This work was granted access to HPC resources of [TGCC/CINES/IDRIS] under the allocation 2021-[AD011012836] made by GENCI (Grand Equipement National de Calcul Intensif). It is also part of the Deep Learning Segmentation (DLS) project financed by ADEME.
REFERENCES
Aghdam, H. H., Gonzalez-Garcia, A., Weijer, J. v. d., and López, A. M. (2019). Active learning for deep detection neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3672–3680.
Allingham, J. U., Wenzel, F., Mariet, Z. E., Mustafa, B., Puigcerver, J., Houlsby, N., Jerfel, G., Fortuin, V., Lakshminarayanan, B., Snoek, J., et al. (2021). Sparse MoEs meet efficient ensembles. arXiv preprint arXiv:2110.03360.
Anselmi, F., Rosasco, L., and Poggio, T. (2016). On invariance and selectivity in representation learning. Information and Inference: A Journal of the IMA, 5(2):134–158.
Ash, J. T., Zhang, C., Krishnamurthy, A., Langford, J., and Agarwal, A. (2020). Deep batch active learning by diverse, uncertain gradient lower bounds.
Atighehchian, P., Branchaud-Charron, F., Freyberg, J., Pardinas, R., and Schell, L. (2019). Baal, a Bayesian active learning library. https://github.com/ElementAI/baal/.
Atighehchian, P., Branchaud-Charron, F., and Lacoste, A. (2020). Bayesian active learning for production, a systematic study and a reusable library.
Beck, N., Sivasubramanian, D., Dani, A., Ramakrishnan, G., and Iyer, R. (2021). Effective evaluation of deep active learning on image classification tasks.
Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Stachniss, C., and Gall, J. (2019). SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences.
Birodkar, V., Mobahi, H., and Bengio, S. (2019). Semantic redundancies in image-classification datasets: The 10% you don't need.
Brostow, G. J., Fauqueur, J., and Cipolla, R. (2009). Semantic object classes in video: A high-definition ground truth database. Pattern Recognit. Lett., 30:88–97.
Buslaev, A., Iglovikov, V. I., Khvedchenya, E., Parinov, A., Druzhinin, M., and Kalinin, A. A. (2020). Albumentations: Fast and flexible image augmentations. Information, 11(2).
Caesar, H., Bankiti, V., Lang, A. H., Vora, S., Liong, V. E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., and Beijbom, O. (2020). nuScenes: A multimodal dataset for autonomous driving.
Chitta, K., Alvarez, J. M., Haussmann, E., and Farabet, C. (2019). Training data subset search with ensemble active learning. arXiv preprint arXiv:1905.12737.
Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., and Schiele, B. (2016). The Cityscapes dataset for semantic urban scene understanding.
Gal, Y. and Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning.
Gal, Y., Islam, R., and Ghahramani, Z. (2017). Deep Bayesian active learning with image data.
Gawlikowski, J., Tassi, C. R. N., Ali, M., Lee, J., Humt, M., Feng, J., Kruspe, A., Triebel, R., Jung, P., Roscher, R., Shahzad, M., Yang, W., Bamler, R., and Zhu, X. X. (2021). A survey of uncertainty in deep neural networks.
Golestaneh, S. A. and Kitani, K. M. (2020). Importance of self-consistency in active learning for semantic segmentation.
Guo, J., Pang, Z., Sun, W., Li, S., and Chen, Y. (2021). Redundancy removal adversarial active learning based on norm online uncertainty indicator. Computational Intelligence and Neuroscience, 2021.
Hong, S., Ha, H., Kim, J., and Choi, M.-K. (2020). Deep active learning with augmentation-based consistency estimation. arXiv preprint arXiv:2011.02666.
Houlsby, N., Huszár, F., Ghahramani, Z., and Lengyel, M. (2011). Bayesian active learning for classification and preference learning.
Hüllermeier, E. and Waegeman, W. (2021). Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. Machine Learning, 110(3):457–506.
Kendall, A. and Gal, Y. (2017). What uncertainties do we need in Bayesian deep learning for computer vision?
Kirsch, A., van Amersfoort, J., and Gal, Y. (2019). BatchBALD: Efficient and diverse batch acquisition for deep Bayesian active learning.
Lakshminarayanan, B., Pritzel, A., and Blundell, C. (2016). Simple and scalable predictive uncertainty estimation using deep ensembles. arXiv preprint arXiv:1612.01474.