Authors:
Genta Ishikawa 1; Rong Xu 2; Jun Ohya 1 and Hiroyasu Iwata 1
Affiliations:
1 Department of Modern Mechanical Engineering, Waseda University, 3-4-1, Ookubo, Shinjuku-ku, Tokyo, Japan
2 Global Information and Telecommunication Institute, Waseda University, 3-4-1, Ookubo, Shinjuku-ku, Tokyo, Japan
Keyword(s):
Fetal, Ultrasound Image, Deep Learning, Grad-CAM, Fetal Position.
Related Ontology Subjects/Areas/Topics:
Applications; Feature Selection and Extraction; Medical Imaging; Pattern Recognition; Robotics; Software Engineering; Theory and Methods
Abstract:
In this paper, we propose an automatic method for estimating fetal position based on the classification and detection of different fetal parts in ultrasound images. A CNN is fine-tuned on ultrasound images used for fetal examination, realizing classification into the four classes "head", "body", "leg" and "other". Based on the trained network, the gradient-based feature map obtained by Grad-CAM is binarized by thresholding, so that a bounding box of the region of interest with large gradients is extracted. The center of the bounding box is obtained from each frame, and the trajectory of these centroids gives the position of the fetus. Experiments with 2,000 images were conducted using a fetal phantom. The recall ratios of the four classes are 99.6% for head, 99.4% for body, 99.8% for legs and 72.6% for others, respectively. The trajectories obtained for a fetus located at the "left", "center" and "right" of the images show the expected geometrical relationship. These results indicate that the estimated fetal position coincides with the actual position very well, which can serve as the first step toward automatic fetal examination by robotic systems.
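The thresholding, bounding-box and centroid-trajectory steps described in the abstract can be sketched as follows. This is a minimal NumPy-only illustration, not the authors' implementation; the function names and the relative threshold of 0.5 on the Grad-CAM heatmap are assumptions.

```python
import numpy as np

def cam_to_bbox_centroid(cam: np.ndarray, thresh: float = 0.5):
    """Binarize a Grad-CAM heatmap at a fraction of its peak value and
    return the bounding box (x_min, y_min, x_max, y_max) of the
    high-gradient region together with the box center.
    Returns None when the map has no positive activation."""
    if cam.max() <= 0:
        return None
    mask = cam >= thresh * cam.max()   # binarization by thresholding
    ys, xs = np.nonzero(mask)
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    center = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
    return (x0, y0, x1, y1), center

def centroid_trajectory(cams, thresh: float = 0.5):
    """Collect the box centers over a sequence of per-frame heatmaps;
    the resulting trajectory approximates the fetal position."""
    points = []
    for cam in cams:
        result = cam_to_bbox_centroid(cam, thresh)
        if result is not None:
            points.append(result[1])
    return points
```

In a real pipeline the heatmaps would come from applying Grad-CAM to the fine-tuned CNN on each ultrasound frame; here they are plain 2-D arrays so the geometric step can be tested in isolation.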