• Data resampling: Non-independent and non-identically distributed (non-IID) data can be resampled to make local datasets more uniform and independent, thereby mitigating their impact (Li et al. 2022).
• Clustering and hierarchical aggregation: Devices with similar data distributions can be grouped into clusters, with local training and aggregation performed within each cluster before global aggregation. This approach helps alleviate the effects of non-IID data (Bendiab et al. 2019).
• Meta-learning and transfer learning: Meta-learning and transfer learning techniques can be used to obtain an improved global model in FL that is better suited to non-IID data.
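The resampling strategy in the first bullet can be illustrated with a short sketch: a client with a skewed local label distribution oversamples its minority classes so that every label appears equally often before training. The function name and the oversampling-to-the-majority rule are illustrative assumptions, not a method prescribed by the paper.

```python
import random
from collections import Counter

def resample_to_balance(samples, labels, seed=0):
    """Oversample minority classes so each label appears equally often,
    pushing a skewed (non-IID) local dataset closer to uniformity."""
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(samples, labels):
        by_label.setdefault(y, []).append(x)
    target = max(len(v) for v in by_label.values())
    out_x, out_y = [], []
    for y, xs in by_label.items():
        # Pad each class up to the majority-class size by random duplication.
        picked = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picked)
        out_y.extend([y] * target)
    return out_x, out_y

# A client holding 4 samples of class 0 and only 1 of class 1:
xs, ys = resample_to_balance([1, 2, 3, 4, 5], [0, 0, 0, 0, 1])
print(Counter(ys))  # both classes now have 4 samples
```

More elaborate schemes (undersampling, synthetic augmentation) follow the same idea of evening out the local distribution.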
In the local training process, the introduction of a model-contrastive loss helps resolve issues caused by non-IID data (Li et al. 2021).
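The model-contrastive loss can be sketched as follows: the current local representation is pulled towards the global model's representation and pushed away from the previous local model's representation. This NumPy sketch follows the cosine-similarity formulation of MOON (Li et al. 2021), but the variable names and the surrounding training loop are illustrative assumptions.

```python
import numpy as np

def model_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """Contrastive loss over model representations: the global model's
    representation is the positive pair, the previous local model's
    representation is the negative pair, tau is a temperature."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = np.exp(cos(z_local, z_global) / tau)
    neg = np.exp(cos(z_local, z_prev) / tau)
    return -np.log(pos / (pos + neg))

# A representation aligned with the global model incurs a smaller loss
# than one that has drifted towards the previous local model:
z_g = np.array([1.0, 0.0])
z_p = np.array([0.0, 1.0])
loss_aligned = model_contrastive_loss(z_g, z_g, z_p)
loss_drifted = model_contrastive_loss(z_p, z_g, z_p)
assert loss_aligned < loss_drifted
```

In training, this term is added to the usual supervised loss, discouraging local updates from drifting away from the global model under non-IID data.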
6 CONCLUSION
Through this investigation, it is found that Federated Averaging (FedAvg) outperforms Federated Stochastic Gradient Descent (FedSGD) in accuracy while requiring fewer communication rounds. Both algorithms train local models on the participating devices, but FedAvg performs multiple local updates before transmitting model parameters to the central server for averaging, which improves communication efficiency and reduces traffic. This communication strategy effectively reduces communication costs and improves model performance in federated learning tasks. To ensure privacy, mechanisms such as differential privacy can be integrated into the communication process to safeguard users' private data. This lays a foundation for the widespread adoption of FL in practical applications and enhances its scalability across large-scale heterogeneous devices.
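The server-side aggregation step described above can be sketched in a few lines: each client's parameters are weighted by its local dataset size and averaged. This is a minimal sketch of FedAvg aggregation only; local training and communication are assumed to happen elsewhere.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Weighted average of client model parameters, with weights
    proportional to each client's local dataset size."""
    total = sum(client_sizes)
    return sum((n / total) * p for n, p in zip(client_sizes, client_params))

# Two clients; the one holding more data pulls the average towards it:
p1 = np.array([0.0, 0.0])
p2 = np.array([1.0, 1.0])
global_params = fedavg_aggregate([p1, p2], client_sizes=[1, 3])
print(global_params)  # [0.75 0.75]
```

FedSGD performs the same kind of server-side averaging, but over per-step gradients rather than over parameters produced by several local epochs, which is why it needs more communication rounds.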
In various image classification tasks, MOON demonstrates superiority over other advanced FL methods. The MOON algorithm has yielded promising results on non-IID data, thereby enhancing the applicability of FL in real-world scenarios. Through its model-contrastive loss, which keeps local representations aligned with those of the global model, MOON improves the performance of the FL model in such cases.
Further efforts are directed towards enhancing the algorithm to strengthen the protection of users' private data, including the application of technologies such as differential privacy and homomorphic encryption. This paper also explores methods to better adapt to heterogeneous devices and non-standardized data, thereby enhancing the practical applicability of FL in real-world scenarios.
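The differential-privacy direction mentioned above is commonly realized with the Gaussian mechanism: each client clips its model update to a bounded norm and adds calibrated noise before sending it to the server. The function name, clipping threshold, and noise multiplier below are illustrative assumptions, and privacy accounting is omitted.

```python
import numpy as np

def dp_sanitize_update(update, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Clip a client update to L2 norm at most clip_norm, then add
    Gaussian noise scaled to the clipping bound."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
    return clipped + noise

update = np.array([3.0, 4.0])   # L2 norm 5.0, so it is scaled down to norm 1
noisy = dp_sanitize_update(update)
```

Clipping bounds any single client's influence on the aggregate, and the noise masks individual contributions; homomorphic encryption instead protects the updates during aggregation itself.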
Encouraging interdisciplinary collaboration and
integrating methodologies from various fields such as
distributed optimization, communication network
optimization, data mining, and privacy protection will
further promote the role of federated learning in a
broader range of application scenarios.
REFERENCES
G. Bendiab, S. Shiaeles, S. Boucherkha, et al. Computers &
Security, 86, 270-290, (2019).
H. Zhu, J. Xu, S. Liu, et al. Neurocomputing, 465, 371-390,
(2021).
J. Konečný, H. B. McMahan, F. X. Yu, et al. arXiv Preprint
arXiv:1610.05492, (2016).
M. Laroui, B. Nour, H. Moungla, et al. Computer
Communications, 180, 210-231, (2021).
M. N. Fekri, K. Grolinger, S. Mir. International Journal of
Electrical Power & Energy Systems, 137, 107669,
(2022).
Q. Li, B. He, D. Song. "Model-Contrastive Federated Learning". In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2021), pp. 10713-10722.
Q. Li, Y. Diao, Q. Chen, et al. "Federated Learning on Non-IID Data Silos: An Experimental Study". In 2022 IEEE 38th International Conference on Data Engineering (ICDE), (2022), pp. 965-978.
S. Caldas, J. Konečný, H. B. McMahan, et al. arXiv Preprint arXiv:1812.07210, (2018).
X. Li, K. Huang, W. Yang, et al. arXiv Preprint
arXiv:1907.02189, (2019).