Authors:
Oana Stan 1; Vincent Thouvenot 2; Aymen Boudguiga 1; Katarzyna Kapusta 2; Martin Zuber 1 and Renaud Sirdey 1
Affiliations:
1 Université Paris-Saclay, CEA, List, F-91120, Palaiseau, France; 2 THALES ThereSIS, France
Keyword(s):
Federated Learning, Homomorphic Encryption, Multi-party Computation, Differential Privacy.
Abstract:
Federated Learning is established as one of the most efficient collaborative learning approaches, enabling different clients to train models on private datasets. By private, we mean that clients' datasets are never disclosed: they serve only to train clients' models locally. A central server is then in charge of aggregating the different models' weights. The central server is generally an honest-but-curious entity that may try to collect information about clients' datasets through model inversion or membership inference attacks. In this paper, we discuss different cryptographic options for providing a secure Federated Learning framework. We investigate the use of Differential Privacy, Homomorphic Encryption (HE) and Multi-Party Computation (MPC) for confidential data aggregation under different threat models. In our homomorphic encryption approach, we compare results obtained with an optimized version of the Paillier cryptosystem to those obtained with BFV and CKKS. As for MPC, different general-purpose protocols are tested under various security assumptions. Overall, we found HE to offer better performance with lower bandwidth usage.
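To illustrate the kind of confidential aggregation the abstract describes, the following is a minimal, self-contained sketch of Paillier-style additively homomorphic aggregation. It uses toy key sizes and integer-quantized weights purely for illustration; it is not the paper's optimized implementation and is in no way secure for real use.

```python
# Toy Paillier cryptosystem sketch (NOT secure: tiny primes, illustration only).
# Shows how a server can sum clients' encrypted weights without decrypting them.
import math
import random

def keygen(p=2003, q=2011):
    """Toy key generation; real deployments need primes of >= 1024 bits."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # modular inverse; valid here since g = n + 1
    return (n, n + 1), (lam, mu, n)   # public key (n, g), secret key

def encrypt(pk, m):
    """Encrypt integer m < n as c = g^m * r^n mod n^2."""
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    n2 = n * n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    """Recover m = L(c^lam mod n^2) * mu mod n, with L(x) = (x - 1) // n."""
    lam, mu, n = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

# Each client encrypts its (quantized) model weight; the aggregating server
# multiplies the ciphertexts, which corresponds to ADDING the plaintexts.
pk, sk = keygen()
client_weights = [42, 17, 99]                     # hypothetical quantized weights
ciphertexts = [encrypt(pk, w) for w in client_weights]
aggregate = 1
for c in ciphertexts:
    aggregate = aggregate * c % (pk[0] ** 2)
print(decrypt(sk, aggregate))                     # prints 158 = 42 + 17 + 99
```

The key point is that the server only ever sees ciphertexts: it performs the aggregation by ciphertext multiplication and learns nothing about individual contributions, matching the honest-but-curious server model discussed above. BFV and CKKS support the same additive pattern (CKKS on approximate real-valued weights, avoiding explicit quantization).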