Authors: Ninad Anklesaria¹; Yashvi Malu¹; Dhyey Nikalwala¹; Urmi Pathak¹; Jinal Patel¹; Nirali Nanavati¹; Preethi Srinivasan² and Arnav Bhavsar²
Affiliations: ¹Department of Computer Engineering, Sarvajanik College of Engineering & Technology, Surat, India; ²School of Computing and Electrical Engineering, Indian Institute of Technology Mandi, Mandi, India
Keyword(s):
MRI, T1-Weighted Image Modality, T2-Weighted Image Modality, Image Translation, DICOM, U-Net.
Abstract:
The acquisition times for different MRI (Magnetic Resonance Imaging) modalities pose a unique challenge to the efficient use of contemporary radiology technology. The ability to synthesize one modality from another can improve the diagnostic utility of the scans. Currently, exploration in the field of medical image-to-image translation is focused on NIfTI (Neuroimaging Informatics Technology Initiative) images. However, DICOM (Bidgood et al., 1997) images are the prevalent image standard in MRI centers. Here, we propose a modified deep learning network based on the U-Net architecture for T1-Weighted image (T1WI) to T2-Weighted image (T2WI) modality translation on DICOM images, and vice versa. Our deep learning model exploits the pixel-wise features shared between T1W and T2W images, which are important for understanding brain structures. Our observations indicate that our approach performs better than previous state-of-the-art methods. Our approach can help decrease the acquisition time required for the scans and thus also avoid motion artifacts.
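
To make the described pipeline concrete, the sketch below shows, in rough outline, what a U-Net-style T1W-to-T2W slice translator operating directly on DICOM pixel data could look like. It is a minimal single-level U-Net in PyTorch with pydicom for I/O; the layer widths, the L1 loss, and the file names (t1w_slice.dcm, t2w_slice.dcm) are illustrative assumptions, not the authors' published configuration.

    # Illustrative sketch only: a compact U-Net-style encoder-decoder for
    # T1W -> T2W slice translation on DICOM pixel data. Layer sizes, loss,
    # and file paths are assumptions, not the paper's exact model.
    import numpy as np
    import pydicom
    import torch
    import torch.nn as nn

    def load_dicom_slice(path):
        """Read a DICOM file and return its pixel data normalized to [0, 1]."""
        ds = pydicom.dcmread(path)
        img = ds.pixel_array.astype(np.float32)
        img = (img - img.min()) / (img.max() - img.min() + 1e-8)
        return torch.from_numpy(img)[None, None]  # shape: (1, 1, H, W)

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        """A single-level U-Net: one downsampling stage, one skip connection.
        Assumes even H and W so the upsampled map matches the skip tensor."""
        def __init__(self):
            super().__init__()
            self.enc = conv_block(1, 32)
            self.down = nn.MaxPool2d(2)
            self.mid = conv_block(32, 64)
            self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec = conv_block(64, 32)   # 64 = 32 (skip) + 32 (upsampled)
            self.out = nn.Conv2d(32, 1, 1)

        def forward(self, x):
            e = self.enc(x)
            m = self.mid(self.down(e))
            u = self.up(m)
            d = self.dec(torch.cat([e, u], dim=1))  # skip connection
            return torch.sigmoid(self.out(d))       # T2W-like slice in [0, 1]

    if __name__ == "__main__":
        # Hypothetical paths; replace with real co-registered T1W/T2W slices.
        t1 = load_dicom_slice("t1w_slice.dcm")
        t2 = load_dicom_slice("t2w_slice.dcm")
        model = TinyUNet()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        opt.zero_grad()
        loss = nn.L1Loss()(model(t1), t2)  # pixel-wise loss, one plausible choice
        loss.backward()
        opt.step()

A pixel-wise loss such as L1 is shown here because the abstract emphasizes pixel-wise features between the modalities; the paper's actual objective and network depth may differ.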