Authors: Fabian Gröger 1; Philippe Gottfrois 2; Ludovic Amruthalingam 2; Alvaro Gonzalez-Jimenez 2; Simone Lionetti 1; Alexander Navarini 2,3 and Marc Pouly 1
Affiliations:
1 Lucerne University of Applied Sciences and Arts, Rotkreuz, Switzerland
2 Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
3 Department of Dermatology, University Hospital of Basel, Switzerland
Keyword(s):
Self-supervised Learning, Pre-training, Transfer Learning, Dermatology, Medical Imaging.
Abstract:
Training supervised models requires large amounts of labelled data, whose creation is often expensive and time-consuming, especially in the medical domain. The standard practice to mitigate the lack of annotated clinical images is transfer learning: fine-tuning pre-trained ImageNet weights on the downstream task. While this approach achieves satisfactory performance, it still requires a sufficiently large labelled dataset to adapt the generic features to the specific task. We report on an ongoing investigation to determine whether self-supervised learning methods applied to unlabelled domain-specific images can provide better representations for digital dermatology than ImageNet pre-training. We consider ColorMe, SimCLR, BYOL, DINO, and iBOT, and present preliminary results on the evaluation of pre-trained initializations for three different medical tasks with mixed imaging modalities. Our intermediate findings indicate a benefit in using features learned by iBOT on dermatology datasets compared to conventional transfer learning from ImageNet classification.
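To make the comparison concrete, the following PyTorch sketch contrasts the two initialization strategies the abstract describes: supervised ImageNet weights versus a backbone pre-trained with a self-supervised method (e.g. SimCLR, BYOL, DINO, or iBOT) on unlabelled dermatology images. This is an illustrative sketch, not code from the paper; the checkpoint filename, the ResNet-50 backbone choice, and the number of downstream classes are assumptions.

```python
# Illustrative sketch (not from the paper) of the two initializations compared
# in the abstract. The checkpoint path and class count are hypothetical.
import torch
import torchvision.models as models

NUM_CLASSES = 8  # hypothetical downstream dermatology classification task

# (a) Conventional transfer learning: start from supervised ImageNet weights,
# then replace the classification head for the downstream task.
imagenet_model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
imagenet_model.fc = torch.nn.Linear(imagenet_model.fc.in_features, NUM_CLASSES)

# (b) Domain-specific self-supervised initialization: load a backbone
# pre-trained with an SSL method on unlabelled dermatology images and
# attach a freshly initialized task head.
ssl_model = models.resnet50(weights=None)
state_dict = torch.load("ssl_dermatology_backbone.pth")  # hypothetical file
ssl_model.load_state_dict(state_dict, strict=False)  # SSL checkpoints have no head
ssl_model.fc = torch.nn.Linear(ssl_model.fc.in_features, NUM_CLASSES)

# Either model is then fine-tuned on the labelled downstream dataset,
# which is the evaluation protocol the abstract refers to.
```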