7 DISCUSSION AND ANALYSIS
Federated learning trains models in a distributed environment, so training can proceed without centralizing data. Communication efficiency is crucial, especially when learning on mobile devices, and reducing the number of communication rounds is vital for improving performance. Techniques such as iterative model averaging, model accuracy checks, and model alert mechanisms can be employed toward this end. Future research could explore the applicability of these methods in broader and more complex scenarios, as well as how to further strengthen model robustness and privacy protection.
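The iterative model averaging mentioned above can be illustrated with a minimal sketch in the style of federated averaging: each client trains locally and reports only its weights, and the server replaces the global model with a data-size-weighted mean, so raw data never leaves the devices. The function and variable names below are illustrative, not drawn from any specific library.

```python
import numpy as np

def federated_averaging(client_updates, client_sizes):
    """One round of iterative model averaging (sketch).

    client_updates: list of weight vectors returned by clients.
    client_sizes: number of local training examples per client,
    used to weight each contribution.
    """
    total = sum(client_sizes)
    new_weights = np.zeros_like(client_updates[0])
    for weights, n_k in zip(client_updates, client_sizes):
        # Clients with more data contribute proportionally more.
        new_weights += (n_k / total) * weights
    return new_weights

# Example: three clients with different data volumes.
updates = [np.full(4, 1.0), np.full(4, 2.0), np.full(4, 4.0)]
sizes = [10, 10, 20]
w_next = federated_averaging(updates, sizes)
# weighted mean = (10*1 + 10*2 + 20*4) / 40 = 2.75 per coordinate
```

In a full system this round would repeat: the server broadcasts `w_next`, clients retrain locally, and the cycle continues until convergence, with the number of such rounds being the main communication cost.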
In the field of FL, comprehensive optimization methods that jointly address communication efficiency, security, and model performance deserve more attention.
8 CONCLUSION
Federated learning (FL) has garnered significant attention as a technology for addressing the problem of data silos. Although FL includes certain privacy-protection mechanisms, it still carries risks of privacy leakage, especially in sectors such as healthcare and finance, where the demand for user privacy protection is urgent. This paper has reviewed the fundamental principles, classifications, and privacy challenges of FL, with a particular focus on privacy threats such as Byzantine attacks, poisoning attacks, and Sybil attacks.
Regarding privacy protection, researchers have
proposed various methods, including homomorphic
encryption, differential privacy, and data
compression technologies. Homomorphic encryption
enables computational operations on encrypted data,
effectively safeguarding the privacy of model
parameters and input data. Differential privacy protects local data by injecting calibrated noise into model parameters before they leave the device, which also prevents the aggregated model from relying too heavily on any individual client's update. Data compression technology enhances communication efficiency by reducing the amount of transmitted data while maintaining the accuracy of model training.
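The noise-injection and compression steps described above can be sketched together in a few lines. This is a simplified illustration, not a complete differentially private protocol: the clipping bound, noise scale, and sparsity level (`clip_norm`, `noise_std`, `k`) are assumed parameters, and a real deployment would calibrate the noise to a target privacy budget.

```python
import numpy as np

def privatize_and_compress(update, clip_norm=1.0, noise_std=0.1, k=2, rng=None):
    """Sketch: Gaussian noise for privacy, then top-k sparsification.

    Clipping bounds each client's influence, the added noise obscures
    individual contributions, and keeping only the k largest-magnitude
    coordinates shrinks the payload transmitted each round.
    """
    rng = rng or np.random.default_rng(0)
    # 1. Clip the update to bound its L2 norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # 2. Add Gaussian noise to the clipped parameters.
    noisy = clipped + rng.normal(0.0, noise_std, size=clipped.shape)
    # 3. Transmit only the k largest-magnitude entries (the rest are zero
    #    and need not be sent at all).
    idx = np.argsort(np.abs(noisy))[-k:]
    sparse = np.zeros_like(noisy)
    sparse[idx] = noisy[idx]
    return sparse

u = np.array([0.5, -3.0, 0.1, 2.0])
out = privatize_and_compress(u, clip_norm=1.0, noise_std=0.05, k=2)
# exactly k coordinates of `out` are nonzero
```

The two mechanisms compose naturally: noise is added before sparsification so that the transmitted coordinates are already privatized, and only the nonzero index-value pairs need to cross the network.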
In the comparative analysis of privacy-protection algorithms, Siren employs a proactive alarm mechanism, edge-computing privacy schemes incorporate blockchain technology, and the FLAME framework integrates differential privacy with FL. These methods not only enhance model accuracy but also effectively counter various types of privacy attacks.
Overall, as a distributed machine learning
approach, FL faces challenges in the comprehensive
optimization of communication efficiency, security,
and model performance. Future research should delve
into the applicability of these methods in broader and
more complex scenarios to further enhance the
robustness and privacy protection performance of FL.
REFERENCES
A. N. Bhagoji, S. Chakraborty, P. Mittal, et al. Analyzing
federated learning through an adversarial lens. In
International Conference on Machine Learning,
(2019), pp. 634-643.
C. Dwork. Communications of the ACM, 54(1), 86-95,
(2011).
C. Fang, Y. Zheng, Y. Wang, et al. Journal of
Communications, 42(11), 28-40, (2021).
C. Zhou, Y. Sun, D. Wang, et al. Journal of Network and
Information Security, 7(5), 77-92, (2021).
D. A. E. Acar, Y. Zhao, R. M. Navarro, et al. arXiv preprint
arXiv:2111.04263, (2021).
E. Bagdasaryan, A. Veit, Y. Hua, et al. “How to backdoor
federated learning”. In International conference on
artificial intelligence and statistics, (2020), pp. 2938-
2948.
F. Sattler, S. Wiedemann, K. R. Müller, et al. IEEE Transactions on Neural Networks and Learning Systems, 31(9), 3400-3413, (2019).
H. Guo, H. Wang, T. Son, et al. “Siren: Byzantine-robust
federated learning via proactive alarming”. In
Proceedings of the ACM Symposium on Cloud
Computing, (2021), pp. 47-60.
H. Wang, M. Yurochkin, Y. Sun, et al. arXiv preprint
arXiv:2002.06440, (2020).
IEEE Computer Society. “IEEE Guide for Architectural
Framework and Application of Federated Machine
Learning.” in IEEE Std 3652.1-2020, (2021), pp.1-6.
J. Chen, J. Chu, M. Su, et al. Journal of Information
Security, 5(4), 14-29, (2020).
J. Konečný, H. B. McMahan, D. Ramage, et al. arXiv
preprint arXiv:1610.02527, (2016).
J. Wang, L. Kong, Z. Huang, et al. Big Data, 7(3), 130-149,
(2021).
J. Xu, B. S. Glicksberg, C. Su, et al. Journal of Healthcare
Informatics Research, 5, 1-19, (2021).
J. Zhang, S. Guo, X. Ma, et al. Advances in Neural
Information Processing Systems, 34, 10092-10104,
(2021).
M. Kang, J. Wang, D. Li, et al. Chinese Journal of
Intelligent Science & Technology, 4(2), (2022).
N. Baracaldo, B. Chen, H. Ludwig, et al. “Mitigating
poisoning attacks on machine learning models: A data
provenance based approach”. In Proceedings of the
10th ACM workshop on artificial intelligence and
security, (2017), pp. 103-110.