Brisimi, T. S., Chen, R., Mela, T., Olshevsky, A., Paschalidis, I. C., and Shi, W. (2018). Federated learning of predictive models from federated electronic health records. Int. J. Medical Informatics, 112:59–67.
Brunet, D., Vrscay, E. R., and Wang, Z. (2012). On the mathematical properties of the structural similarity index. IEEE Transactions on Image Processing, 21(4):1488–1499.
Chen, J., Pan, X., Monga, R., Bengio, S., and Jozefowicz, R. (2017). Revisiting distributed synchronous SGD. arXiv preprint, pages 1–10.
Chilimbi, T. M., Suzue, Y., Apacible, J., and Kalyanaraman, K. (2014). Project Adam: Building an efficient and scalable deep learning training system. In Flinn, J. and Levy, H., editors, OSDI 2014, pages 571–582, Broomfield, CO, USA. USENIX Association.
Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Le, Q. V., Mao, M. Z., Ranzato, M., Senior, A. W., Tucker, P. A., Yang, K., and Ng, A. Y. (2012). Large scale distributed deep networks. In Bartlett, P. L., Pereira, F. C. N., Burges, C. J. C., Bottou, L., and Weinberger, K. Q., editors, NIPS 2012, pages 1232–1240, Lake Tahoe, NV, USA. Curran Associates, Inc.
DeWitt, D. J. and Gray, J. (1992). Parallel database systems: The future of high performance database systems. Commun. ACM, 35(6):85–98.
El-Mhamdi, E., Guerraoui, R., Guirguis, A., Hoang, L. N., and Rouault, S. (2020). Genuinely distributed Byzantine machine learning. In Emek, Y. and Cachin, C., editors, ACM PODC 2020, pages 355–364, Virtual Event, Italy. ACM.
Geiping, J., Bauermeister, H., Dröge, H., and Moeller, M. (2020). Inverting gradients - how easy is it to break privacy in federated learning? In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H., editors, NeurIPS 2020, virtual.
Goldreich, O. (1998). Secure multi-party computation. Manuscript. Preliminary version, 78.
Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., Bonawitz, K. A., Charles, Z., Cormode, G., Cummings, R., D'Oliveira, R. G. L., Rouayheb, S. E., Evans, D., Gardner, J., Garrett, Z., Gascón, A., Ghazi, B., Gibbons, P. B., Gruteser, M., Harchaoui, Z., He, C., He, L., Huo, Z., Hutchinson, B., Hsu, J., Jaggi, M., Javidi, T., Joshi, G., Khodak, M., Konečný, J., Korolova, A., Koushanfar, F., Koyejo, S., Lepoint, T., Liu, Y., Mittal, P., Mohri, M., Nock, R., Özgür, A., Pagh, R., Raykova, M., Qi, H., Ramage, D., Raskar, R., Song, D., Song, W., Stich, S. U., Sun, Z., Suresh, A. T., Tramèr, F., Vepakomma, P., Wang, J., Xiong, L., Xu, Z., Yang, Q., Yu, F. X., Yu, H., and Zhao, S. (2019). Advances and open problems in federated learning.
Konečný, J., McMahan, H. B., Yu, F. X., Richtárik, P., Suresh, A. T., and Bacon, D. (2016). Federated learning: Strategies for improving communication efficiency. pages 1–10.
Krizhevsky, A. (2009). Learning multiple layers of features from tiny images. Technical report, University of Toronto.
LeCun, Y. (1998). The MNIST database of handwritten digits.
Li, M., Andersen, D. G., Park, J. W., Smola, A. J., Ahmed, A., Josifovski, V., Long, J., Shekita, E. J., and Su, B. (2014). Scaling distributed machine learning with the parameter server. In Flinn, J. and Levy, H., editors, OSDI 2014, pages 583–598, Broomfield, CO, USA. USENIX Association.
Li, T., Sahu, A. K., Talwalkar, A., and Smith, V. (2020). Federated learning: Challenges, methods, and future directions. IEEE Signal Process. Mag., 37(3):50–60.
Lian, X., Zhang, C., Zhang, H., Hsieh, C., Zhang, W., and Liu, J. (2017). Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H. M., Fergus, R., Vishwanathan, S. V. N., and Garnett, R., editors, NIPS 2017, pages 5330–5340, Long Beach, CA, USA.
Lin, Y., Han, S., Mao, H., Wang, Y., and Dally, B. (2018). Deep gradient compression: Reducing the communication bandwidth for distributed training. In ICLR 2018, Vancouver, BC, Canada. OpenReview.net.
Liu, D. C. and Nocedal, J. (1989). On the limited memory BFGS method for large scale optimization. Math. Program., 45(1-3):503–528.
Liu, Y., Kang, Y., Xing, C., Chen, T., and Yang, Q. (2020). A secure federated transfer learning framework. IEEE Intelligent Systems, 35(4):70–82.
Mania, H., Pan, X., Papailiopoulos, D., Recht, B., Ramchandran, K., and Jordan, M. I. (2017). Perturbed iterate analysis for asynchronous stochastic optimization. SIAM Journal on Optimization, 27(4):2202–2229.
McMahan, B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. In Singh, A. and Zhu, X. J., editors, AISTATS 2017, pages 1273–1282, Fort Lauderdale, FL, USA. PMLR.
McMahan, H. B., Ramage, D., Talwar, K., and Zhang, L. (2018). Learning differentially private recurrent language models. In ICLR 2018, Vancouver, BC, Canada. OpenReview.net.
Melis, L., Song, C., Cristofaro, E. D., and Shmatikov, V. (2018). Inference attacks against collaborative learning. CoRR, abs/1805.04049:1–16.
Mohassel, P. and Zhang, Y. (2017). SecureML: A system for scalable privacy-preserving machine learning. In IEEE SP 2017, pages 19–38, San Francisco, CA, USA. ISSN: 2375-1207.
Ormándi, R., Hegedüs, I., and Jelasity, M. (2013). Gossip learning with linear models on fully distributed data. Concurr. Comput. Pract. Exp., 25(4):556–571.
Phong, L. T., Aono, Y., Hayashi, T., Wang, L., and Moriai, S. (2017). Privacy-preserving deep learning: Revisited and enhanced. In Batten, L., Kim, D. S., Zhang, X., and Li, G., editors, Applications and Techniques in Information Security, Communications in Computer and Information Science, pages 100–110, Singapore. Springer.
Phong, L. T., Aono, Y., Hayashi, T., Wang, L., and Moriai, S. (2018). Privacy-preserving deep learning via additively homomorphic encryption. IEEE Trans. Inf. Forensics Secur., 13(5):1333–1345.