
careful assessment. Addressing these challenges will be crucial to validating the effectiveness of STL and ensuring that it generalizes well across diverse dermatological datasets.
Looking forward, we aim to investigate whether this STL approach can be generalized beyond medical imaging or whether its effectiveness is uniquely suited to clinical applications. Understanding its adaptability to broader contexts could reveal new possibilities for versatile models capable of handling complex visual recognition tasks across a variety of domains.
Improving Classification in Skin Lesion Analysis Through Segmentation