process the data used. We will present our proposed deep learning model architectures in the fourth section. Afterward, we will report our findings and compare them with previous research. Finally, we will conclude by providing some insight into our future research.
2 RELATED WORKS
In recent years, there has been a great deal of research aiming to assist radiologists in appropriately segmenting, locating, and identifying clinically significant prostate cancer. For instance, (Liu et al., 2017) used XmasNet, a CNN-based classifier, to categorize mp-MRI prostate cancer lesions in the PROSTATEx dataset. With an AUC of 0.84, XmasNet surpassed all conventional machine learning models on the training and testing data. (Vente and Vos, 2021) instead used a 2D U-Net with MRI slices as input and lesion segmentation maps that encode the GGG. Achieving a voxel-wise weighted kappa of 0.446 and a Dice similarity coefficient of 0.37, the model beats traditional multi-class classification and multi-label ordinal regression.
(Lehaire, 2016) used their own data, a 49-patient mp-MRI database, together with SVM-L and logistic regression, to construct a CADe system that generated a cancer probability map for the radiologist. The drawback was that the system was built with dictionary learning, and the dictionaries were estimated from the features of previously extracted images. Although their results were statistically superior to those of the other CADe schemes, this difference was not as evident on the sample probability maps. Meanwhile,
(Yunzhi Wang and Wang, 2018) demonstrated in 2018 that an mp-MRI-based segmentation scheme built on a state-of-the-art Fully Convolutional Network (FCN) architecture with residual connections outperforms earlier T2W-based schemes for prostate mp-MRI segmentation. In turn, (To et al., 2018) used a
3D deep, dense multi-path Convolutional Neural Net-
work (CNN) based on the encoder-decoder architec-
ture to segment the prostate in MRI images on two
distinct datasets. The encoder is composed of densely
connected layers, and the decoder interprets the fea-
tures and predicts the total prostate volume.
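For illustration, a minimal sketch of this kind of densely connected encoder-decoder for volumetric segmentation is given below. It is not the network of (To et al., 2018); the block sizes, growth rate, and single-channel input are assumptions chosen only to show the dense-connectivity and encoder-decoder pattern.

```python
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    """Small densely connected 3D block: each layer receives the
    concatenation of all previous feature maps (illustrative sizes)."""
    def __init__(self, in_channels, growth_rate=8, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(channels, growth_rate, kernel_size=3, padding=1),
                nn.BatchNorm3d(growth_rate),
                nn.ReLU(inplace=True),
            ))
            channels += growth_rate
        self.out_channels = channels

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

class DenseEncoderDecoder3D(nn.Module):
    """Toy encoder-decoder: dense blocks and pooling in the encoder,
    a transposed convolution in the decoder, one-channel mask output."""
    def __init__(self, in_channels=1):
        super().__init__()
        self.enc1 = DenseBlock3D(in_channels)
        self.pool = nn.MaxPool3d(2)
        self.enc2 = DenseBlock3D(self.enc1.out_channels)
        self.up = nn.ConvTranspose3d(self.enc2.out_channels,
                                     self.enc2.out_channels, 2, stride=2)
        self.dec = DenseBlock3D(self.enc2.out_channels)
        self.head = nn.Conv3d(self.dec.out_channels, 1, kernel_size=1)

    def forward(self, x):
        x = self.enc1(x)
        x = self.enc2(self.pool(x))
        x = self.dec(self.up(x))
        return torch.sigmoid(self.head(x))  # voxel-wise prostate probability

# Example: a single-channel 32^3 MRI patch -> a 32^3 probability map
model = DenseEncoderDecoder3D(in_channels=1)
mask = model(torch.randn(1, 1, 32, 32, 32))
```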
The authors of (Dai Z, 2020) used the PROSTATEx dataset in the two-stage approach they presented in 2019. They first trained a Mask R-CNN model to segment the prostate structures automatically, then used a weakly supervised deep neural network to detect and classify lesions. On their validation set, this work attained average AUCs of 0.912 and 0.882.
In the same vein, (Zhenzhen Dai, 2019) employed Mask R-CNN for prostate segmentation and for Dominant Intra-prostatic Lesion (DIL) segmentation and localization using mp-MRI; a minimal sketch of such a Mask R-CNN segmentation stage follows this paragraph. Furthermore, (Karimi et al., 2019) trained two different CNNs to produce a global CNN, creating an automatic prostate segmentation method for T2-weighted (T2w) images in the prostate MRI sequence.
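Several of the pipelines above rely on an off-the-shelf Mask R-CNN as the first, prostate-segmentation stage. The sketch below shows what such a stage could look like with torchvision's detection API (assuming a recent torchvision where the weights and num_classes keywords are available); the two-class setup (background plus prostate), the input preprocessing, and the mask threshold are illustrative assumptions rather than the cited authors' configuration, and the model would still need to be trained on prostate data.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Hypothetical stage 1: a Mask R-CNN with two classes
# (background + prostate) used purely to delineate the gland.
model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)
model.eval()

# A single-channel T2w slice is repeated to 3 channels because the
# torchvision detection models expect RGB-like input in [0, 1].
t2w_slice = torch.rand(1, 256, 256)        # placeholder MRI slice
image = t2w_slice.repeat(3, 1, 1)

with torch.no_grad():
    output = model([image])[0]             # list of images in, list of dicts out

# Keep the most confident prostate mask; a second-stage network for
# lesion detection/classification would then work inside this region.
if len(output["scores"]) > 0:
    best = output["scores"].argmax()
    prostate_mask = output["masks"][best, 0] > 0.5   # boolean H x W mask
```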
In (Stahl, 2020), the authors used a Genetic Algorithm to fine-tune a trained CNN for PCa detection and obtain a higher AUC on their 6-channel diffusion-weighted prostate MRI dataset; on their test dataset, this work yielded an AUC of 0.707. (Yoo et al., 2019) also designed and implemented an automated CNN-based pipeline to detect clinically significant PCa in axial DWI images for each patient.
The authors of (Neto, 2020) published a deep learning-based study of PCa via mp-MRI in 2020. They experimented with numerous models, including 3D U-Net, 3D ResNet, and XmasNet. The outcome was a Dice value of roughly 0.69, whereas other researchers reported a Dice score of 0.83. Meanwhile, (Liu, 2020) proposed a V-Net architecture, a volumetric CNN implemented in PyTorch that borrows from the U-Net and is divided into residual learning stages, to perform rapid and accurate MRI prostate volume segmentation. (Nuhić and Kevric, 2020) developed a novel PCa clinical management prototype using nine classification algorithms on a PCa database. The authors report a classification rate of 98.71%. For the AdaBoost classifier, sensitivity was around 97.4%, while specificity was perfect, at 100%. The other algorithms (Naïve Bayes, Multi-layer Perceptron, Simple Logistic, Nearest Neighbor, Random Committee, PART, LMT, and Random Forest) also produced excellent results.
The authors of (Andrew Hwang, 2021) trained and evaluated 13 different CNN architectures (binary classification models) on Radboudumc MRI scans of 204 patients suspected of having prostate cancer, the same dataset we utilized in our study. The best model achieved an accuracy of 86.9% and an area under the ROC curve (AUROC) of 90.3%. In the same year (2021), (B. Liu, 2021) proposed an improved 2D U-Net model with an added Squeeze-and-Excitation (SE) layer for prostate segmentation on the public PROMISE12 dataset. The model is built around an encoder stage that uses convolutional blocks, SE layers, and max-pooling layers to extract features from the input, and a decoder stage that maps the extracted features back to the original image resolution. Experiments demonstrated that the proposed model can outperform other approaches in terms of segmentation accuracy, with a mean DSC of 87%.
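As a rough illustration of how such an SE layer fits into a U-Net encoder stage, a sketch is given below. It is not the configuration of (B. Liu, 2021); the channel counts, reduction ratio, and block layout are assumptions used only to show the squeeze-and-excitation mechanism, i.e., channel-wise re-weighting of the feature maps.

```python
import torch
import torch.nn as nn

class SELayer(nn.Module):
    """Squeeze-and-Excitation: global-average-pool the feature map,
    learn per-channel weights, and rescale the channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # channel-wise re-weighting

class SEEncoderStage(nn.Module):
    """One hypothetical U-Net encoder stage: conv block -> SE -> max-pool."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.se = SELayer(out_channels)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        skip = self.se(self.conv(x))          # feature map kept for the decoder skip
        return self.pool(skip), skip

# Example: a 128x128 single-channel slice -> 64x64 downsampled features
stage = SEEncoderStage(in_channels=1, out_channels=32)
down, skip = stage(torch.randn(1, 1, 128, 128))
```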