features. The model is then trained on different numbers of GPUs. With four GPUs, performance degrades noticeably, which may be due to an unsuitable parallelisation configuration, an uneven distribution of data across devices, or similar issues. The best results are obtained with two GPUs.
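
One way to address both the parallelisation configuration and the uneven data distribution mentioned here is PyTorch's DistributedDataParallel together with a DistributedSampler. The sketch below is illustrative only: the dataset path, batch size, learning rate and the assumption of three classes (benign, malignant, normal) are not necessarily those used in the experiments.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler
from torchvision import datasets, models, transforms

def main():
    # Launched with: torchrun --nproc_per_node=<num_gpus> train_ddp.py
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    train_set = datasets.ImageFolder("data/train", transform=tfm)  # hypothetical path

    # DistributedSampler shards the dataset evenly across processes,
    # avoiding the uneven per-GPU data distribution suspected above.
    sampler = DistributedSampler(train_set, shuffle=True)
    loader = DataLoader(train_set, batch_size=32, sampler=sampler, num_workers=4)

    model = models.mobilenet_v2(num_classes=3).cuda(local_rank)  # 3 classes assumed
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = torch.nn.CrossEntropyLoss()

    for epoch in range(80):
        sampler.set_epoch(epoch)              # reshuffle the shards each epoch
        for images, labels in loader:
            images, labels = images.cuda(local_rank), labels.cuda(local_rank)
            optimizer.zero_grad()
            criterion(model(images), labels).backward()  # gradients are all-reduced here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
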
However, a problem common to all models is that the best test-set accuracy tends to appear within the first ten training epochs, and test accuracy declines as training continues. Training accuracy reaches roughly 99% after about 80 epochs, while test accuracy no longer improves with further training; this points to overfitting and may call for reducing the complexity of the model.
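
Because the best test accuracy appears within roughly the first ten epochs, one lightweight countermeasure is early stopping on a held-out split. The sketch below assumes an existing PyTorch model, data loaders, an optimiser, and hypothetical train_one_epoch and evaluate helpers; it is not the exact procedure used in this study.

best_acc, best_state, patience, stall = 0.0, None, 5, 0

for epoch in range(80):
    train_one_epoch(model, train_loader, optimizer)   # hypothetical helper
    acc = evaluate(model, val_loader)                 # hypothetical helper

    if acc > best_acc:
        best_acc, stall = acc, 0
        # Keep a copy of the best weights seen so far.
        best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
    else:
        stall += 1
        if stall >= patience:   # no improvement for `patience` consecutive epochs
            break

model.load_state_dict(best_state)   # restore the best-performing weights
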
In conclusion, the use of cross-validation significantly improves the test accuracy of MobileNetV2, demonstrating its importance for generalisation. ResNet18 does not yet perform well enough, and a simpler model may be needed to improve accuracy. Furthermore, increasing the number of GPUs did not always reduce training time or improve accuracy, suggesting that the multi-GPU training strategy needs to be optimised.
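
For reference, the cross-validation procedure mentioned above can be organised roughly as follows; the five-fold split, batch size, class count and the hypothetical train and evaluate helpers are assumptions rather than the exact protocol of the experiments, and dataset stands for the ultrasound image dataset already loaded.

import numpy as np
import torch
from sklearn.model_selection import KFold
from torch.utils.data import DataLoader, Subset
from torchvision import models

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
fold_acc = []

for fold, (train_idx, val_idx) in enumerate(kfold.split(np.arange(len(dataset)))):
    train_loader = DataLoader(Subset(dataset, train_idx), batch_size=32, shuffle=True)
    val_loader = DataLoader(Subset(dataset, val_idx), batch_size=32)

    model = models.mobilenet_v2(num_classes=3).cuda()   # fresh weights for every fold
    train(model, train_loader)                          # hypothetical helper
    fold_acc.append(evaluate(model, val_loader))        # hypothetical helper

print(f"mean accuracy over {len(fold_acc)} folds: {sum(fold_acc) / len(fold_acc):.3f}")
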
4 CONCLUSION
The purpose of this study is to investigate whether parallel computing on multiple GPUs improves the performance of the trained model. The results indicate that it does not: multi-GPU training must also account for data transfer between GPUs, and the time spent integrating (synchronising) weights across GPUs grows as the number of GPUs increases. Accuracy therefore does not improve as GPUs are added, and overfitting reappears as the number of training epochs increases. The data in this task are very easy for the models to overfit, and increasing model complexity does not help with extracting features from such simple images; at present, no satisfactory classifier has been obtained for this easily overfitted data. Future work will try to identify which parts of the model and pipeline slow down training, and to improve accuracy by using libraries that support multi-GPU training or algorithms that aggregate the parameters held on different GPUs. Future work will also aim to obtain a model that does not overfit such images so easily.
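
The synchronisation cost referred to above can also be measured directly. The sketch below times a repeated all-reduce over a tensor roughly the size of MobileNetV2's parameter count; the tensor size and repetition count are arbitrary assumptions, and the script is meant only to illustrate how communication time can be compared across different GPU counts.

import os
import time
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")     # launched via torchrun
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Flat tensor roughly the size of MobileNetV2's ~3.5M parameters (assumption).
grads = torch.randn(3_500_000, device=f"cuda:{local_rank}")

torch.cuda.synchronize()
start = time.time()
for _ in range(100):
    dist.all_reduce(grads, op=dist.ReduceOp.SUM)   # the weight-integration step
torch.cuda.synchronize()

if dist.get_rank() == 0:
    elapsed_ms = (time.time() - start) / 100 * 1000
    print(f"mean all-reduce time on {dist.get_world_size()} GPUs: {elapsed_ms:.2f} ms")

dist.destroy_process_group()
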
REFERENCES
Boukaache, A., Benhassine, N. E., & Boudjehem, D. 2019.
Breast cancer image classification using convolutional
neural networks (CNN) models. International Journal
of Informatics and Applied Mathematics, 6(2), 20-34.
Cao, Z., et al. 2019. An experimental study on breast lesion
detection and classification from ultrasound images
using deep learning architectures. BMC Medical
Imaging, 19, 1-9.
Guo, R., Lu, G., Qin, B., & Fei, B. 2018. Ultrasound
imaging technologies for breast cancer detection and
management: a review. Ultrasound in Medicine &
Biology, 44(1), 37-70.
Gøtzsche, P. C., & Jørgensen, K. J. 2013. Screening for
breast cancer with mammography. Cochrane Database
of Systematic Reviews, (6).
He, K., et al. 2016. Deep residual learning for image
recognition. Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition.
Kaggle. 2021. Breast ultrasound images dataset. Retrieved
from https://www.kaggle.com/datasets/aryashah2k/breast-ultrasound-images-dataset/code,
last accessed April 13, 2024.
Ma, Y. 2020. Diagnosis of Benign and Malignant Breast
Lesions in Rats by MRI Plain Scan Combined with
Diffusion-Weighted Imaging. Revista Científica de la
Facultad de Ciencias Veterinarias, 30(5), 2464-2473.
Ouyang, Y., et al. 2019. Classification of benign and
malignant breast tumors using h-scan ultrasound
imaging. Diagnostics, 9(4), 182.
Qiu, Y., Chang, C. S., Yan, J. L., Ko, L., & Chang, T. S.
2019. Semantic segmentation of intracranial
hemorrhages in head CT scans. In 2019 IEEE 10th
International Conference on Software Engineering and
Service Science (ICSESS) (pp. 112-115). IEEE.
Qiu, Y., Wang, J., Jin, Z., Chen, H., Zhang, M., & Guo, L.
2022. Pose-guided matching based on deep learning for
assessing quality of action on rehabilitation
training. Biomedical Signal Processing and
Control, 72, 103323.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen,
L. C. 2018. MobileNetV2: Inverted residuals and linear
bottlenecks. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition (pp. 4510-
4520).
Whang, J. S., et al. 2013. The causes of medical malpractice
suits against radiologists in the United States.
Radiology, 266(2), 548-554.