Image and Text Feature Based Multimodal Learning for Multi-Label Classification of Radiology Images in Biomedical Literature

Md. Hasan, Md Jani, Md Rahman

2024

Abstract

Biomedical images are crucial for diagnosing and planning treatments, as well as for advancing scientific understanding of various ailments. To highlight regions of interest (RoIs) and convey medical concepts, annotation markers such as arrows, letters, or symbols are employed. However, annotating these images with appropriate medical labels remains a significant challenge. In this study, we propose a framework that leverages multimodal input features, including text/label features and visual features, to facilitate accurate annotation of biomedical images with multiple labels. Our approach integrates state-of-the-art models such as ResNet50 and Vision Transformers (ViT) to extract informative features from the images. Additionally, we employ DistilGPT-2, a distilled version of the Transformer-based Generative Pre-trained Transformer (GPT-2) language model, to extract textual features, leveraging its natural language understanding capabilities. This combination of image and text modalities allows for a more comprehensive representation of the biomedical data, leading to improved annotation accuracy. Using the features extracted from both modalities, we trained a simplified Convolutional Neural Network (CNN) based multi-label classifier to learn image-text relations and predict multiple labels for multimodal radiology images. We used the ImageCLEFmedical 2022 and 2023 datasets to demonstrate the effectiveness of our framework; these datasets contain a diverse range of biomedical images, enabling evaluation of the framework's performance under realistic conditions. We achieved promising results, with an F1 score of 0.508. Our proposed framework shows strong potential for annotating biomedical images with multiple labels, contributing to improved image understanding and analysis in the medical image processing domain.
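
The fusion approach described in the abstract can be illustrated with a short PyTorch sketch. This is not the authors' released code: the layer sizes, number of labels, mean-pooling of DistilGPT-2 hidden states, and the small fully connected head (standing in for the paper's simplified CNN classifier) are illustrative assumptions.

# Minimal sketch (assumptions noted above) of the multimodal fusion idea:
# image features from a pretrained ResNet50 and text features from DistilGPT-2
# are concatenated and passed to a small multi-label classification head.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights
from transformers import GPT2Model

class MultimodalMultiLabelClassifier(nn.Module):
    def __init__(self, num_labels: int = 100):  # num_labels is an illustrative value
        super().__init__()
        # Image branch: ResNet50 backbone with the final classification layer removed.
        backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
        self.image_encoder = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 2048, 1, 1)
        # Text branch: DistilGPT-2 hidden states (hidden size 768), mean-pooled over tokens.
        self.text_encoder = GPT2Model.from_pretrained("distilgpt2")
        # Fusion head over the concatenated image + text features.
        self.classifier = nn.Sequential(
            nn.Linear(2048 + 768, 512),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(512, num_labels),
        )

    def forward(self, images, input_ids, attention_mask):
        img_feat = self.image_encoder(images).flatten(1)                     # (B, 2048)
        txt_out = self.text_encoder(input_ids=input_ids,
                                    attention_mask=attention_mask).last_hidden_state
        mask = attention_mask.unsqueeze(-1).float()
        txt_feat = (txt_out * mask).sum(1) / mask.sum(1).clamp(min=1)        # (B, 768)
        return self.classifier(torch.cat([img_feat, txt_feat], dim=1))       # (B, num_labels)

# Multi-label training would apply a per-label sigmoid, e.g. nn.BCEWithLogitsLoss() on the logits.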



Paper Citation


in Harvard Style

Hasan M., Jani M. and Rahman M. (2024). Image and Text Feature Based Multimodal Learning for Multi-Label Classification of Radiology Images in Biomedical Literature. In Proceedings of the 17th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 2: HEALTHINF; ISBN 978-989-758-688-0, SciTePress, pages 679-686. DOI: 10.5220/0012438400003657


in BibTeX Style

@conference{healthinf24,
author={Md. Hasan and Md Jani and Md Rahman},
title={Image and Text Feature Based Multimodal Learning for Multi-Label Classification of Radiology Images in Biomedical Literature},
booktitle={Proceedings of the 17th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 2: HEALTHINF},
year={2024},
pages={679-686},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012438400003657},
isbn={978-989-758-688-0},
}


in EndNote Style

TY - CONF

JO - Proceedings of the 17th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 2: HEALTHINF
TI - Image and Text Feature Based Multimodal Learning for Multi-Label Classification of Radiology Images in Biomedical Literature
SN - 978-989-758-688-0
AU - Hasan M.
AU - Jani M.
AU - Rahman M.
PY - 2024
SP - 679
EP - 686
DO - 10.5220/0012438400003657
PB - SciTePress