experimentally that the Laplace mechanism is effective against the DLG attack. As future work, we plan to explore other privacy mechanisms that may offer a better trade-off between privacy and accuracy in the context of machine learning. We are also interested in studying more complex federated learning scenarios in which participants and datasets may change over time.
ACKNOWLEDGEMENTS
The work of Sayan Biswas and Catuscia Palamidessi was supported by the European Research Council (ERC) project HYPATIA under the European Union's Horizon 2020 research and innovation programme, grant agreement no. 835294. The work of Kangsoo Jung was supported by ELSA - The European Lighthouse on Secure and Safe AI, a Network of Excellence funded by the European Union under the Horizon Europe research and innovation programme.
REFERENCES
Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., and Zhang, L. (2016). Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308–318.
Agarwal, N., Suresh, A. T., Yu, F. X. X., Kumar, S., and McMahan, B. (2018). cpSGD: Communication-efficient and differentially-private distributed SGD. Advances in Neural Information Processing Systems, 31.
Andrés, M. E., Bordenabe, N. E., Chatzikokolakis, K., and Palamidessi, C. (2013). Geo-indistinguishability: Differential privacy for location-based systems. In Proceedings of the 2013 ACM SIGSAC Conference on Computer and Communications Security, pages 901–914.
Andrew, G., Thakkar, O., McMahan, B., and Ramaswamy,
S. (2021). Differentially private learning with adaptive
clipping. Advances in Neural Information Processing
Systems, 34.
Bassily, R., Nissim, K., Stemmer, U., and Guha Thakurta,
A. (2017). Practical locally private heavy hitters. In
Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H.,
Fergus, R., Vishwanathan, S., and Garnett, R., editors,
Advances in Neural Information Processing Systems,
volume 30. Curran Associates, Inc.
Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A.,
McMahan, H. B., Patel, S., Ramage, D., Segal, A.,
and Seth, K. (2016). Practical secure aggregation for
federated learning on user-held data. arXiv preprint
arXiv:1611.04482.
Caldas, S., Duddu, S. M. K., Wu, P., Li, T., Konečný, J., McMahan, H. B., Smith, V., and Talwalkar, A. (2018). LEAF: A benchmark for federated settings. arXiv preprint arXiv:1812.01097.
Chatzikokolakis, K., Andrés, M. E., Bordenabe, N. E., and Palamidessi, C. (2013). Broadening the scope of differential privacy using metrics. In International Symposium on Privacy Enhancing Technologies Symposium, pages 82–102. Springer.
CMMS (2021). Centers for Medicare and Medicaid Services. Accessed: 2022-09-21.
Cohen, G., Afshar, S., Tapson, J., and Van Schaik, A. (2017). EMNIST: Extending MNIST to handwritten letters. In 2017 International Joint Conference on Neural Networks (IJCNN), pages 2921–2926. IEEE.
Dwork, C., Kenthapadi, K., McSherry, F., Mironov, I., and
Naor, M. (2006a). Our data, ourselves: Privacy via
distributed noise generation. In Vaudenay, S., editor,
Advances in Cryptology - EUROCRYPT 2006, pages
486–503, Berlin, Heidelberg. Springer Berlin Heidel-
berg.
Dwork, C., McSherry, F., Nissim, K., and Smith, A.
(2006b). Calibrating noise to sensitivity in private data
analysis. In Halevi, S. and Rabin, T., editors, Theory
of Cryptography, pages 265–284, Berlin, Heidelberg.
Springer Berlin Heidelberg.
Geyer, R. C., Klein, T., and Nabi, M. (2017). Differentially
private federated learning: A client level perspective.
arXiv preprint arXiv:1712.07557.
Ghosh, A., Chung, J., Yin, D., and Ramchandran, K.
(2020). An efficient framework for clustered federated
learning. Advances in Neural Information Processing
Systems, 33:19586–19597.
Goodfellow, I. J., Mirza, M., Xiao, D., Courville, A., and
Bengio, Y. (2013). An empirical investigation of
catastrophic forgetting in gradient-based neural net-
works. arXiv preprint arXiv:1312.6211.
Hitaj, B., Ateniese, G., and Perez-Cruz, F. (2017). Deep models under the GAN: Information leakage from collaborative deep learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 603–618.
Hu, R., Guo, Y., Li, H., Pei, Q., and Gong, Y. (2020). Per-
sonalized federated learning with differential privacy.
IEEE Internet of Things Journal, 7(10):9530–9539.
Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526.
Konečný, J., McMahan, H. B., Yu, F. X., Richtárik, P., Suresh, A. T., and Bacon, D. (2016). Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492.
Le Métayer, D. and De, S. J. (2016). PRIAM: A Privacy Risk Analysis Methodology. In Livraga, G., Torra, V., Aldini, A., Martinelli, F., and Suri, N., editors, Data Privacy Management and Security Assurance, Heraklion, Greece. Springer.
Lopez-Paz, D. and Ranzato, M. (2017). Gradient episodic memory for continual learning. Advances in Neural Information Processing Systems, 30.