images with multiple views, the benefit of the proposed
MapReduce-like processing becomes even more significant.
Secondly, the proposed model mainly utilizes two
separately trained models: YOLOv2 for tooth detection
and a CNN for pocket depth estimation. These models
are independently trained on different sets of images.
We will try to design an end-to-end model by changing
the current model's layer composition, so that both
components can be trained simultaneously.
Thirdly, in addition to the oral images, supplementary
information such as X-ray images and blood test re-
sults should contribute to improving the estimation
accuracy. We will work on designing a model that
can handle such a multimodal dataset.
7 CONCLUSION
In this paper, we proposed a MapReduce-like pocket
depth estimation model that performs parallel pocket
region extraction in a Mapping phase and multi-
directional information aggregation in a Reducing
phase. The experiments revealed that there is no
strong relationship between pocket depth and the
visual appearance of the pocket, which makes it
difficult to diagnose moderate periodontitis from oral
images alone. However, we demonstrated the feasibility
of screening for patients with severe periodontal
disease. The proposed model can therefore serve as a
self-check tool at home, offering a visual assessment
of quality comparable to a dentist's.
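The Map/Reduce structure summarized above can be sketched as follows. This is a minimal illustration only: the per-view estimator is stubbed with fixed values (in the actual pipeline it would be the YOLOv2 detector followed by the CNN regressor), and the view labels, tooth identifiers, and the 4 mm severity threshold are illustrative assumptions, not values from the paper.

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import median

def estimate_depths(view):
    # Hypothetical per-view estimator: stands in for the
    # YOLOv2 tooth detector + CNN depth regressor.
    stub = {
        "frontal": {"tooth_11": 3.0, "tooth_21": 5.5},
        "left":    {"tooth_21": 6.0},
        "right":   {"tooth_11": 2.5},
    }
    return stub[view]

def map_phase(views):
    # Mapping phase: each oral-image view is processed
    # independently, so extraction can run in parallel.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(estimate_depths, views))

def reduce_phase(per_view_estimates):
    # Reducing phase: aggregate the multi-directional
    # estimates for each tooth across all views.
    merged = {}
    for estimates in per_view_estimates:
        for tooth, depth in estimates.items():
            merged.setdefault(tooth, []).append(depth)
    return {tooth: median(ds) for tooth, ds in merged.items()}

depths = reduce_phase(map_phase(["frontal", "left", "right"]))
# Assumed screening rule: depths of 4 mm or more flag a tooth
# for follow-up (an illustrative threshold, not the paper's).
flagged = [t for t, d in depths.items() if d >= 4.0]
```

Teeth seen from several directions get one aggregated depth each; here `tooth_21` would be flagged while `tooth_11` would not.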
ACKNOWLEDGMENT
We would like to thank Osaka University Dental Hos-
pital for setting up the research environment and
providing the medical dataset for the experiments. This
work was supported by Social Smart Dental Hospi-
tal, a collaborative project between Osaka University
and NEC Corp.
REFERENCES
Becker, A. S., Marcon, M., Ghafoor, S., Wurnig, M. C.,
Frauenfelder, T., and Boss, A. (2017). Deep learning
in mammography: diagnostic accuracy of a multipur-
pose image analysis software in the detection of breast
cancer. Investigative radiology, 52(7):434–440.
Burlina, P. M., Joshi, N., Pekala, M., Pacheco, K. D., Fre-
und, D. E., and Bressler, N. M. (2017). Automated
grading of age-related macular degeneration from
color fundus images using deep convolutional neural
networks. JAMA ophthalmology, 135(11):1170–1176.
Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M.,
Blau, H. M., and Thrun, S. (2017). Dermatologist-
level classification of skin cancer with deep neural net-
works. Nature, 542(7639):115.
Greenstein, G. (1997). Contemporary interpretation of
probing depth assessments: Diagnostic and therapeu-
tic implications. a literature review. Journal of Peri-
odontology, 68(12):1194–1205.
Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu,
D., Narayanaswamy, A., Venugopalan, S., Widner, K.,
Madams, T., Cuadros, J., et al. (2016). Development
and validation of a deep learning algorithm for de-
tection of diabetic retinopathy in retinal fundus pho-
tographs. JAMA, 316(22):2402–2410.
Imangaliyev, S., van der Veen, M. H., Volgenant, C. M.,
Keijser, B. J., Crielaard, W., and Levin, E. (2016).
Deep learning for classification of dental plaque im-
ages. In International Workshop on Machine Learn-
ing, Optimization and Big Data, pages 407–410.
Springer.
Kamnitsas, K., Ledig, C., Newcombe, V. F., Simpson,
J. P., Kane, A. D., Menon, D. K., Rueckert, D., and
Glocker, B. (2017). Efficient multi-scale 3d cnn with
fully connected crf for accurate brain lesion segmen-
tation. Medical image analysis, 36:61–78.
Khader, Y. S., Dauod, A. S., El-Qaderi, S. S., Alkafajei, A.,
and Batayha, W. Q. (2006). Periodontal status of di-
abetics compared with nondiabetics: a meta-analysis.
Journal of diabetes and its complications, 20(1):59–
68.
LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learn-
ing. Nature, 521(7553):436.
Litjens, G., Kooi, T., Bejnordi, B. E., Setio, A. A. A.,
Ciompi, F., Ghafoorian, M., van der Laak, J. A., van
Ginneken, B., and Sánchez, C. I. (2017). A survey
on deep learning in medical image analysis. Medical
image analysis, 42:60–88.
Prajapati, S., Nagaraj, R., and Mitra, S. (2017). Classifica-
tion of dental diseases using CNN and transfer learning.
In Proceedings of the 5th International Symposium on
Computational and Business Intelligence (ISCBI).
Redmon, J. and Farhadi, A. (2017). YOLO9000: Better,
faster, stronger. In Proceedings of the 30th IEEE Con-
ference on Computer Vision and Pattern Recognition,
pages 6517–6525.
Simonyan, K. and Zisserman, A. (2014). Very deep con-
volutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556.
Ting, D. S., Yi, P. H., and Hui, F. (2018). Clinical applica-
bility of deep learning system in detecting tuberculo-
sis with chest radiography. Radiology, 286(2):729.
Wu, T., Trevisan, M., Genco, R. J., Dorn, J. P., Falkner,
K. L., and Sempos, C. T. (2000). Periodontal dis-
ease and risk of cerebrovascular disease: the first na-
tional health and nutrition examination survey and
its follow-up study. Archives of Internal Medicine,
160(18):2749–2755.
Yoshie, H., Itou, K., Murakami, S., and Shin, K. (2013).
Clinical periodontal disease. Ishiyaku Publishing, 2
edition.
A MapReduce-like Deep Learning Model for the Depth Estimation of Periodontal Pockets