tends to exceed the labeled area, whereas cardiac-cavity segmentation on the ResNet-18 architecture tends to produce an area smaller than the labeled one. VGG-16 achieves a higher average accuracy at every view, roughly 83%-93%, compared with ResNet-18's average accuracy of roughly 76%-92%. On the VGG-16 architecture, the cardiac view with the highest average accuracy is the long axis and the lowest is the short axis, whereas on ResNet-18 the highest is the 4-chamber view and the lowest is the long axis.
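The over- and under-segmentation patterns described above can be quantified with two simple per-frame measures: pixel accuracy against the label, and the ratio of predicted cavity area to labeled area (greater than 1 indicates over-segmentation, less than 1 under-segmentation). The following is an illustrative sketch with toy binary masks, not the evaluation code used in this work:

```python
import numpy as np

def pixel_accuracy(pred, label):
    """Fraction of pixels where the predicted mask matches the label."""
    return float((pred == label).mean())

def area_ratio(pred, label):
    """Predicted cavity area relative to the labeled area.
    > 1 suggests over-segmentation, < 1 under-segmentation."""
    return float(pred.sum()) / float(label.sum())

# Toy 8x8 binary masks standing in for a labeled cardiac cavity
# and two hypothetical predictions.
label = np.zeros((8, 8), dtype=np.uint8)
label[2:6, 2:6] = 1              # 16-pixel labeled cavity

over = np.zeros_like(label)
over[1:7, 1:7] = 1               # prediction spills past the label

under = np.zeros_like(label)
under[3:5, 3:5] = 1              # prediction shrinks inside the label

print(pixel_accuracy(over, label), area_ratio(over, label))    # ratio > 1
print(pixel_accuracy(under, label), area_ratio(under, label))  # ratio < 1
```

Reporting both measures per view (long axis, short axis, 2-chamber, 4-chamber) makes the VGG-16 versus ResNet-18 comparison above directly reproducible.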
We have presented multidimensional echocardiography image segmentation using deep learning, specifically a convolutional neural network. Using only echocardiography images extracted from video, the method addresses an existing problem: helping doctors delineate the cardiac cavity when examining cardiac patients with the echocardiography devices used in Indonesia. Existing works that feed manually designed features into a segmentation system appear difficult to apply to multidimensional echocardiography because the video quality varies and is uncertain. Although the parameters were learned from training data, the video depends on each hospital's echocardiography device, and examinations are carried out by different operators on different patients each day, so the resulting echocardiography videos also differ in quality. Such approaches therefore currently seem somewhat difficult to implement in practice.