Authors: Joan Jonathan Mnyambo 1; Amir Aly 1; Shang-Ming Zhou 2; Yinghui Wei 1; Stephen Mullin 3 and Emmanuel Ifeachor 1
Affiliations:
1 School of Engineering, Computing, and Mathematics, University of Plymouth, Plymouth, U.K.
2 School of Nursing and Midwifery, University of Plymouth, Plymouth, U.K.
3 Peninsula Medical School, University of Plymouth, Plymouth, U.K.
Keyword(s):
Tuberculosis, Drug Resistance, Deep Learning, Vision Transformer, Data-Efficient Image Transformer, Transfer Learning, Chest X-Rays.
Abstract:
Tuberculosis is an infectious disease with increasing fatalities around the world. Diagnosis remains a major challenge to its control and management because adequate diagnostic tools are lacking, which contributes significantly to the prevalence of drug-resistant tuberculosis. Convolutional Neural Network (CNN) models have recently been developed to detect drug-resistant tuberculosis by analyzing chest radiograph images from the TB portal, but their classification performance is low. This is because CNNs struggle to capture the complex global and overlapping features in medical imaging, such as chest radiographs of drug-resistant tuberculosis. In contrast, transformers excel in these areas by using self-attention mechanisms that capture subtle, long-range dependencies across images. In this study, we used a pretrained data-efficient image transformer (DEiT) model to enhance the diagnosis of drug-resistant tuberculosis and differentiate it from drug-sensitive tuberculosis. The new model achieved an AUC of 80% in the detection of drug-resistant tuberculosis, an improvement of 13% in AUC over current CNN models trained on data from the same source. A bootstrap significance test shows that the difference in AUCs is statistically significant. The results of the study can help healthcare providers improve the diagnostic accuracy of drug-resistant tuberculosis and treatment outcomes.
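The approach summarized above is transfer learning from a pretrained DEiT to a binary chest X-ray classifier (drug-resistant vs. drug-sensitive). The following is a minimal sketch of such a setup using the timm library; the specific model variant, input size, loss, and optimizer settings are illustrative assumptions, not the authors' reported configuration.

# Sketch: fine-tune a pretrained DEiT for binary TB drug-resistance classification.
# Model variant, learning rate, and loss are assumptions for illustration only.
import timm
import torch
import torch.nn as nn

# Load an ImageNet-pretrained Data-efficient Image Transformer and replace its
# classification head with a single-logit output for the binary task.
model = timm.create_model("deit_base_patch16_224", pretrained=True, num_classes=1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of chest X-rays shaped (N, 3, 224, 224).
    labels: 1 = drug-resistant, 0 = drug-sensitive."""
    model.train()
    optimizer.zero_grad()
    logits = model(images).squeeze(1)        # (N, 1) -> (N,)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()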
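The abstract also reports a bootstrap significance test on the difference in AUCs between the DEiT model and the CNN baseline. Below is a minimal sketch of one common paired-bootstrap procedure on a shared test set; the resampling scheme, number of replicates, and p-value construction are assumptions for illustration, not necessarily the authors' exact method.

# Sketch: paired bootstrap test for the difference in AUCs of two models
# scored on the same test set (resampling details are illustrative).
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_diff(y_true, scores_a, scores_b, n_boot=2000, seed=0):
    """Return the observed AUC difference (A - B) and a two-sided bootstrap p-value."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    observed = roc_auc_score(y_true, scores_a) - roc_auc_score(y_true, scores_b)
    diffs, n = [], len(y_true)
    while len(diffs) < n_boot:
        idx = rng.integers(0, n, n)          # resample test cases with replacement
        if len(np.unique(y_true[idx])) < 2:  # AUC needs both classes in the sample
            continue
        diffs.append(roc_auc_score(y_true[idx], scores_a[idx])
                     - roc_auc_score(y_true[idx], scores_b[idx]))
    diffs = np.array(diffs)
    # Two-sided p-value: how often the bootstrap difference falls on either side of zero.
    p_value = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return observed, p_value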