Table 4: Performance results using general MPC protocols: straightforward aggregation of 5 FL contributions containing
1000 updates each, aggregated by two or three computing parties (CP) acting as aggregator servers. Communication cost is
measured in MB for both one CP (2nd row) and all CPs (3rd row), in both dishonest-majority (DM) and honest-majority
(HM) configurations. There is a visible performance gap between the semi-honest and dishonest adversary models, as well as
between the honest- and dishonest-majority settings.
                 Semi-honest, DM, 2CP   Semi-honest, HM, 3CP   Dishonest, DM, 2CP   Dishonest, DM, 3CP
Time [sec]               69.9                   11.8                 476.48               518.8
Send, 1CP [MB]         7666.7                   83.6               41817.3             83634.6
Total send [MB]       15333.4                  250.2               83613.9            250643
tion i5 processor running at 1.30 GHz with 16 GiB
of RAM on Linux. They confirmed that a straightforward
implementation of distributed aggregation using general
MPC protocols leads to a rather large overhead in terms
of communication and computation costs. However, the
advantage of this approach is that it enables smooth
switching between different protocols, and thus easy
adjustment of the security level and the number of
parties. It shows that securing the computation against
an active attacker is possible, although it comes at a
high cost (which can be acceptable for use cases where
security is a much higher priority than training speed).
Moreover, such an implementation is robust to dynamic
client or aggregation-server dropouts.
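The core of the MPC-based aggregation can be sketched with additive secret sharing: each client splits its update into one share per computing party, each CP sums the shares it receives, and recombining the CP totals yields the aggregated model. This is a minimal illustration of the principle only; real frameworks such as MP-SPDZ use protocol-specific rings, MACs and communication layers, and the field modulus and function names below are illustrative choices, not taken from the paper's implementation.

```python
import random

PRIME = 2**61 - 1  # toy field modulus; illustrative, not a protocol parameter from the paper

def share(value, n_parties):
    """Split an integer into n_parties additive shares mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def aggregate(client_updates, n_parties):
    """Each CP sums the shares it receives; recombining CP totals gives the plain sum."""
    n_weights = len(client_updates[0])
    cp_totals = [[0] * n_weights for _ in range(n_parties)]
    for update in client_updates:
        for i, w in enumerate(update):
            for p, s in enumerate(share(w, n_parties)):
                cp_totals[p][i] = (cp_totals[p][i] + s) % PRIME
    # Recombination, done by the party entitled to learn the aggregate.
    return [sum(cp_totals[p][i] for p in range(n_parties)) % PRIME
            for i in range(n_weights)]

# Two clients, a 3-weight model (integer-encoded weights).
print(aggregate([[1, 2, 3], [4, 5, 6]], 3))  # [5, 7, 9]
```

No single CP sees anything but uniformly random shares, which is what allows the number of parties (2CP vs. 3CP above) to be varied without changing the client-side logic.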
5 ANALYSIS
The preliminary results presented in the above sec-
tions show promising perspectives for securing
Federated Learning with DP, FHE and MPC. As ex-
pected, there is a delicate trade-off between the
guarantees offered by global DP and the accuracy
of the trained model.
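The DP side of that trade-off can be made concrete with a minimal sketch of global DP aggregation: clip each client update to a norm bound, sum, then add Gaussian noise calibrated to that bound. The clipping bound `C` and noise multiplier `sigma` below are illustrative placeholders, not values from the experiments; larger `sigma` means stronger privacy and lower model accuracy.

```python
import random

def clip(update, C):
    """Scale the update so its L2 norm is at most C (standard DP-FL clipping)."""
    norm = sum(w * w for w in update) ** 0.5
    scale = min(1.0, C / norm) if norm > 0 else 1.0
    return [w * scale for w in update]

def dp_aggregate(updates, C, sigma):
    """Sum clipped updates, then add Gaussian noise scaled to the sensitivity C."""
    clipped = [clip(u, C) for u in updates]
    agg = [sum(col) for col in zip(*clipped)]
    return [w + random.gauss(0.0, sigma * C) for w in agg]

# Two clients; with sigma > 0 each run returns a differently perturbed aggregate.
noisy = dp_aggregate([[0.3, 0.4], [0.6, 0.8]], C=0.5, sigma=1.0)
```

With `sigma = 0` the function reduces to plain clipped summation, which is a convenient way to see exactly how much accuracy the noise itself costs.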
As for the results obtained with (F)HE and MPC,
let us analyse and compare the concrete case of 5
clients and a model size of 1000 weights. The
tests show that, under the hypothesis of an honest-
but-curious server, homomorphic encryption in
its various flavours (either an optimized version of the
additive Paillier cryptosystem or classical batching with
levelled BFV and CKKS) has better performance
in terms of execution time and bandwidth require-
ments than the general MPC protocols. This general
conclusion, that homomorphic encryption performs
better for aggregation in the context of federated
learning, remains valid for other testing parameters.
Of course, the MPC results could be improved
through specialized protocols, and MPC remains useful
in situations where homomorphic encryption cannot be
used (see Section 3.2 for details).
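The additive homomorphism that makes Paillier suitable for this aggregation can be shown in a few lines: multiplying ciphertexts yields an encryption of the sum of the plaintexts, so the server aggregates without ever decrypting individual updates. The sketch below uses tiny fixed primes purely for illustration; it is not the optimized variant benchmarked in the paper, and a real deployment would use large primes and proper encoding of model weights.

```python
import random
from math import gcd

# Toy Paillier keypair with tiny fixed primes -- illustrative only, not secure.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)      # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)        # precomputed decryption factor

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: the product of ciphertexts encrypts the sum.
updates = [17, 25, 8]          # one integer-encoded weight from 3 clients
agg_cipher = 1
for u in updates:
    agg_cipher = (agg_cipher * encrypt(u)) % n2
print(decrypt(agg_cipher))     # 50
```

BFV and CKKS achieve the same effect with ciphertext addition and SIMD batching, which is what makes them competitive for the 1000-weight vectors discussed above.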
ACKNOWLEDGEMENTS
The research leading to these results has received
funding from the European Union’s Preparatory Ac-
tion on Defence Research (PADR-FDDT-OPEN-03-
2019). This paper reflects only the authors’ views and
the Commission is not liable for any use that may be
made of the information contained therein.
SECRYPT 2022 - 19th International Conference on Security and Cryptography