Authors: Yusuke Moriyama (1); Chonho Lee (2); Susumu Date (2); Yoichiro Kashiwagi (3); Yuki Narukawa (3); Kazunori Nozaki (4) and Shinya Murakami (3)
Affiliations:
(1) Graduate School of Information Science and Technology, Osaka University, Osaka, Japan
(2) Cybermedia Center, Osaka University, Osaka, Japan
(3) Graduate School of Dentistry, Osaka University, Osaka, Japan
(4) Osaka University Dental Hospital, Osaka, Japan
Keyword(s):
Periodontal Disease, Periodontal Pocket, Convolutional Neural Networks, Deep Learning, Object Detection.
Related Ontology Subjects/Areas/Topics: Biomedical Engineering; Health Information Systems; Pattern Recognition and Machine Learning; Pervasive Health Systems and Services
Abstract:
This paper explores the feasibility of deep learning-based diagnostic imaging for periodontal disease, especially periodontal pocket screening. Having investigated conventional approaches, we identify two difficulties in estimating the pocket depth of teeth from oral images: feature extraction of the Region of Interest (ROI), i.e., the pocket region, which is hindered by the small size of the ROI, and tooth identification, which is hindered by the high heterogeneity of teeth (e.g., in size, shape, and color). We propose a MapReduce-like periodontal pocket depth estimation model that overcomes these difficulties. Specifically, a set of MapTasks is executed in parallel, each focusing on one of multiple views (e.g., front, left, right) of the oral images and running an object detection model to extract high-resolution pocket-region images. After a classifier estimates pocket depth from the extracted images, ReduceTasks aggregate the depth estimates with respect to each pocket. Experimental results show that the proposed model achieves an estimation accuracy of 76.5 percent. Moreover, we verify the practical feasibility of the proposed model with 91.7 percent accuracy under the condition that the screening test targets severe periodontitis (pocket depth of 6 mm or more).
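The MapReduce-like pipeline described above can be outlined in code. The sketch below is a minimal illustration, not the authors' implementation: `detect` and `classify` are hypothetical stand-ins for the paper's object detection model and pocket-depth classifier, and majority voting is one plausible aggregation rule for the ReduceTask.

```python
from collections import defaultdict
from statistics import mode


def map_task(view_image, detect, classify):
    """MapTask: extract pocket regions from one oral view and estimate a
    depth for each. Returns a list of (pocket_id, estimated_depth_mm) pairs.

    `detect` and `classify` are placeholders for the detection and
    classification models; they are injected so the sketch stays runnable."""
    return [(pid, classify(region)) for pid, region in detect(view_image)]


def reduce_task(map_outputs):
    """ReduceTask: aggregate depth estimates for the same pocket across
    views, here by majority vote over the per-view predictions."""
    grouped = defaultdict(list)
    for pairs in map_outputs:
        for pid, depth in pairs:
            grouped[pid].append(depth)
    return {pid: mode(depths) for pid, depths in grouped.items()}


# Toy usage with dummy detector/classifier (one pocket seen in two views):
views = ["front", "left"]
detect = lambda view: [("tooth11_mesial", view)]  # dummy: one region per view
classify = lambda region: 4                       # dummy: constant 4 mm
results = reduce_task([map_task(v, detect, classify) for v in views])
# results maps each pocket id to its aggregated depth estimate
```

In practice each MapTask can run independently on its own view, which is what makes the per-view detection stage parallelizable.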