
malicious model updates from client contributions in federated settings. Our research builds on this foundation by specifically investigating how effectively autoencoders detect malicious updates in differentially private federated learning settings. To the best of our knowledge, our work is the first to systematically evaluate and quantify the performance of autoencoders in this context, advancing our understanding of their role in ensuring the security and reliability of differentially private federated learning systems.
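As a concrete illustration of the detection principle studied here, the following minimal sketch (in PyTorch) trains an autoencoder on client updates assumed benign and flags incoming updates whose reconstruction error exceeds a simple statistical threshold. The architecture, the flattened-update dimensionality, and the mean-plus-three-standard-deviations decision rule are illustrative assumptions, not the configuration evaluated in our experiments.

# Sketch: flagging anomalous client updates via autoencoder
# reconstruction error. All dimensions and the threshold rule are
# illustrative assumptions, not this paper's exact setup.
import torch
import torch.nn as nn

class UpdateAutoencoder(nn.Module):
    def __init__(self, dim: int, latent: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                     nn.Linear(128, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, benign_updates, epochs=50, lr=1e-3):
    # Fit only on updates assumed benign, so the autoencoder learns the
    # "normal" update distribution (including any DP noise).
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(benign_updates), benign_updates)
        loss.backward()
        opt.step()
    return model

@torch.no_grad()
def flag_malicious(model, benign_updates, incoming_updates):
    # Per-update reconstruction error; updates reconstructed much worse
    # than benign traffic are flagged as potentially malicious.
    def errors(x):
        return ((model(x) - x) ** 2).mean(dim=1)
    base = errors(benign_updates)
    threshold = base.mean() + 3 * base.std()  # assumed decision rule
    return errors(incoming_updates) > threshold

if __name__ == "__main__":
    dim = 256                             # toy flattened-update size
    benign = torch.randn(200, dim) * 0.1  # synthetic "benign" updates
    mixed = torch.cat([benign[:8] + 2.0, benign[8:16]])  # 8 outliers
    model = train(UpdateAutoencoder(dim), benign)
    print(flag_malicious(model, benign, mixed))

In a DP-FL setting, training the detector on noised updates means the learned notion of "normal" already accounts for the perturbation added for privacy, which is precisely what makes the interaction between DP noise and reconstruction-based detection worth quantifying.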
6 CONCLUSION AND FUTURE DIRECTIONS
This paper investigates the potential of autoencoders, known for their data representation and reconstruction capabilities, as a means of identifying anomalous updates in differentially private federated learning (DP-FL). Through empirical analysis, we assessed the efficacy of autoencoders and addressed the associated challenges, with the aim of strengthening the integrity of DP-FL in practical scenarios. Future directions include exploring attacks beyond malicious updates, such as adversarial learning approaches. A thorough robustness analysis is also needed: the approach should be evaluated across diverse scenarios and datasets to assess how well it generalizes under varying noise levels and data distributions.