Authors:
Vijay Kumar¹; Het Patel¹; Kolin Paul¹; Abhidnya Surve²; Shorya Azad² and Rohan Chawla²
Affiliations:
¹ Khosla School of Information Technology, Indian Institute of Technology, Delhi, India
² Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, Delhi, India
Keyword(s):
Fundus Image, Retinopathy of Prematurity (ROP), Computer Aided Diagnosis (CAD), Generative Adversarial Network (GAN), U-Net, Blood Vessel Segmentation, Deep Convolutional Neural Network (DCNN).
Abstract:
Retinopathy of prematurity (ROP) is the leading cause of blindness in premature babies worldwide. ROP is quantified through structural information in the retinal vessel map, such as vessel width, tortuosity and extent. The accuracy of such quantitative studies therefore depends on the quality of the segmented blood vessel map. Fundus images used for neonatal eye examination are prone to many artefacts and noise due to patient movement, erratic illumination, an improperly focused camera sensor, etc. Existing vessel segmentation algorithms work well on retinal images of adults but fail to detect the underdeveloped vessel structures in neonatal fundus images. At the same time, the scarcity of infant fundus images has hindered the development of data-driven methods for vessel segmentation. This work proposes a new Deep Convolutional Neural Network (DCNN) based vessel segmentation system for screening for the neonatal eye disorder ROP. The proposed system uses a DCNN, either the Pix2Pix Generative Adversarial Network (GAN) or U-Net, for vessel segmentation. Using publicly available fundus image datasets, we applied an efficient and robust training procedure for the proposed system and tested it on preterm neonatal eye images from a local hospital. Experimental results show that the proposed system enables better screening of ROP, with robustness to noise and inter-class variation. It achieved an average accuracy of 96.69% for vessel segmentation and a Dice coefficient between 0.60 and 0.64. Our system achieves an accuracy of 88.23% for the Zone-1 case of ROP.
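The reported figures (96.69% pixel accuracy, Dice between 0.60 and 0.64) are standard metrics comparing a predicted binary vessel mask against a ground-truth mask. As a minimal sketch (not the authors' evaluation code; array names and toy masks are illustrative), both can be computed with NumPy:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel accuracy and Dice coefficient for binary vessel masks.

    pred, truth: NumPy arrays of identical shape; nonzero/True entries
    mark vessel pixels.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    # Accuracy: fraction of pixels classified correctly (vessel or background).
    accuracy = float(np.mean(pred == truth))
    # Dice: 2|A ∩ B| / (|A| + |B|) over vessel pixels only.
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    dice = float(2.0 * intersection / denom) if denom > 0 else 1.0
    return accuracy, dice

# Tiny illustrative masks (not real fundus data).
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
acc, dice = segmentation_metrics(pred, truth)  # acc = 4/6, dice = 2*2/6
```

Note that a high pixel accuracy alongside a moderate Dice score is typical for vessel segmentation: vessels occupy a small fraction of the image, so background pixels dominate the accuracy while Dice reflects vessel overlap directly.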