Figure 7: Testing accuracy and loss with learning rate = 0.001; the other parameters' values are the same as in the previous parameter set (Picture credit: Original).
When the learning rate was 0.001, the accuracy dropped considerably, and both stability and convergence degraded. In the first few epochs, however, this model reached a higher accuracy than the model trained with a learning rate of 0.0001, and it appeared to converge earlier, but it did not continue to converge. A larger learning rate is therefore not well suited to such fine-grained medical data: the larger update steps tend to skip over subtle features and lead to misclassification. A smaller learning rate, by contrast, yields better accuracy and more stable convergence, because it allows the model to attend to finer details of the images and use them for correct classification.
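To make this comparison concrete, the following minimal sketch shows a local training setup with the smaller learning rate; the optimizer choice (AdamW, following Loshchilov & Hutter, 2017), the weight decay value, and the four-class output head are illustrative assumptions rather than the exact configuration used here.

import torch
from torchvision import models

# EfficientNet-B0 backbone with a four-class head (assumed to match the MRI
# tumour categories of the dataset).
model = models.efficientnet_b0(num_classes=4)
criterion = torch.nn.CrossEntropyLoss()

# Small learning rate: slower at first, but more stable convergence on
# fine-grained medical images. The weight decay value is an assumption.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-4)

# Replacing lr=1e-4 with lr=1e-3 reproduces the behaviour discussed above:
# accuracy rises quickly in the first epochs, then oscillates instead of
# continuing to converge.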
4 CONCLUSIONS
This article applies federated learning to an MRI dataset with the aim of improving data privacy. By combining EfficientNet-B0 with the FedAvg algorithm, the study developed a classification method that is more flexible and secure than recent alternatives. Through experiments, the study identified the hyperparameters that train the model with high accuracy and fast convergence. Furthermore, the study compared the performance of different CNN models to demonstrate the advantages of the combination. For future work, data heterogeneity remains a major challenge: how to integrate a suitable method to improve accuracy on more heterogeneous data will be an important research direction. The method should also be tested on other, more complex datasets.
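As a brief illustration of the aggregation step at the core of this combination, the sketch below shows a FedAvg-style weighted average of client model weights (McMahan et al., 2017); the function name, the use of PyTorch state dictionaries, and the example client sizes are illustrative assumptions rather than the exact implementation used in this study.

import copy
import torch
from torchvision import models

def fedavg_aggregate(client_states, client_sizes):
    # Weighted average of client parameters, with weights proportional to
    # each client's local dataset size (the FedAvg rule).
    total = float(sum(client_sizes))
    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        global_state[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state

# Example round: two hypothetical clients fine-tune EfficientNet-B0 locally,
# then the server averages their weights into a new global model.
clients = [models.efficientnet_b0(num_classes=4) for _ in range(2)]
new_global = fedavg_aggregate([c.state_dict() for c in clients],
                              client_sizes=[120, 80])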
REFERENCES
AbdulRahman, S., Tout, H., Ould-Slimane, H., Mourad, A.,
Talhi, C., & Guizani, M. 2020. A survey on federated
learning: The journey from centralized to distributed
on-site learning and beyond. IEEE Internet of Things
Journal, 8(7), 5476-5497.
Bhuvaji, S., Kadam, A., Bhumkar, P., & Dedge, S. 2020.
Brain Tumor Classification (MRI). Kaggle.
https://www.kaggle.com/datasets/sartajbhuvaji/brain-
tumor-classification-mri/data
Cubuk, E. D., Zoph, B., Mane, D., Vasudevan, V., & Le, Q.
V. 2018. Autoaugment: Learning augmentation policies
from data. arXiv preprint arXiv:1805.09501.
He, K., Zhang, X., Ren, S., & Sun, J. 2016. Deep residual
learning for image recognition. In Proceedings of the
IEEE conference on computer vision and pattern
recognition (pp. 770-778).
Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., ... & Zhao, S. 2021. Advances and open problems in federated learning. Foundations and trends® in machine learning, 14(1–2), 1-210.
Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S., Stich, S., & Suresh, A. T. 2020. Scaffold: Stochastic controlled averaging for federated learning. In International conference on machine learning (pp. 5132-5143). PMLR.
Li, S., Kou, P., Ma, M., Yang, H., Huang, S., & Yang, Z.
2024. Application of Semi-supervised Learning in
Image Classification: Research on Fusion of Labeled
and Unlabeled Data. IEEE Access.
Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A.,
& Smith, V. 2020. Federated optimization in
heterogeneous networks. Proceedings of Machine
learning and systems, 2, 429-450.
Liu, Y., Yang, H., & Wu, C. 2023. Unveiling patterns: A
study on semi-supervised classification of strip surface
defects. IEEE Access, 11, 119933-119946.
Loshchilov, I., & Hutter, F. 2017. Decoupled weight decay
regularization. arXiv preprint arXiv:1711.05101.
McMahan, B., Moore, E., Ramage, D., Hampson, S., & y
Arcas, B. A. 2017. Communication-efficient learning
of deep networks from decentralized data. In Artificial
intelligence and statistics (pp. 1273-1282). PMLR.
Qiu, Y., Hui, Y., Zhao, P., Cai, C. H., Dai, B., Dou, J., ... &
Yu, J. 2024. A novel image expression-driven modeling
strategy for coke quality prediction in the smart
cokemaking process. Energy, 294, 130866.
Qiu, Y., Wang, J., Jin, Z., Chen, H., Zhang, M., & Guo, L.
2022. Pose-guided matching based on deep learning for
assessing quality of action on rehabilitation training.
Biomedical Signal Processing and Control, 72, 103323.
Simonyan, K., & Zisserman, A. 2014. Very deep
convolutional networks for large-scale image
recognition. arXiv preprint arXiv:1409.1556.
Tan, M., & Le, Q. 2019. EfficientNet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning (pp. 6105-6114). PMLR.
Wang, J., Liu, Q., Liang, H., Joshi, G., & Poor, H. V. 2020.
Tackling the objective inconsistency problem in
heterogeneous federated optimization. Advances in
neural information processing systems, 33, 7611-7623.