Authors:
Elia Pacioni 1,2; Francisco Fernández De Vega 2 and Davide Calvaresi 1
Affiliations:
1 University of Applied Sciences and Arts of Western Switzerland (HES-SO Valais/Wallis), Rue de l’Industrie 23, Sion, 1950, Switzerland
2 Universidad de Extremadura, Av. Santa Teresa de Jornet, 38, Mérida, 06800, Spain
Keyword(s):
Federated Learning, Multi-Agent Systems, Model Aggregation, Communication Efficiency, Genetic Programming.
Abstract:
Federated Learning (FL) enables collaborative training of machine learning models while preserving client data privacy. However, its conventional client-server paradigm presents two key challenges: (i) communication efficiency and (ii) model aggregation optimization. Inefficient communication, often caused by transmitting low-impact updates, results in unnecessary overhead, particularly in bandwidth-constrained environments such as wireless or mobile networks, or in scenarios with numerous clients. Furthermore, traditional aggregation strategies lack the adaptability required for stable convergence and optimal performance. This paper emphasizes the distributed nature of FL clients (agents) and advocates for local, autonomous, and intelligent strategies by which each client evaluates the significance of its own updates, for example via a "distance" metric relative to the global model. This approach improves communication efficiency by prioritizing impactful updates. Additionally, the paper proposes an adaptive aggregation method leveraging genetic programming and transfer learning to dynamically evolve aggregation equations, optimizing the convergence process. By integrating insights from multi-agent systems, the proposed approach aims to foster more efficient and robust frameworks for decentralized learning.
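As a minimal sketch of the client-side significance check described in the abstract, the snippet below assumes each client compares its locally trained weights against the latest global model with an L2 distance and uploads only when the update exceeds a threshold. The norm choice, the `should_transmit` helper, and the threshold are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def update_distance(local_weights, global_weights):
    """L2 distance between a client's local model and the global model,
    used as a proxy for how much new information the update carries.
    Both arguments are lists of per-layer numpy arrays."""
    return np.sqrt(sum(
        float(np.sum((lw - gw) ** 2))
        for lw, gw in zip(local_weights, global_weights)
    ))

def should_transmit(local_weights, global_weights, threshold):
    """Client-side decision: upload the update only when it deviates from
    the global model by more than `threshold`, skipping low-impact rounds
    to save bandwidth."""
    return update_distance(local_weights, global_weights) > threshold

# Example: a client whose update barely moved stays silent this round.
global_w = [np.zeros((4, 4)), np.zeros(4)]
local_w = [w + 0.01 * np.random.randn(*w.shape) for w in global_w]
print(should_transmit(local_w, global_w, threshold=0.5))  # likely False
```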
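Likewise, a hedged sketch of what genetic programming could evolve in place of a fixed aggregation rule: `fedavg` below is the standard data-size-weighted mean, while `evolved_candidate` stands in for one hypothetical individual from a GP population. Its primitives (sample counts and distances to the global model) are assumed here for illustration and are not the paper's evolved equation.

```python
import numpy as np

def fedavg(updates, sizes):
    """Baseline: FedAvg's fixed rule, a data-size-weighted mean of the
    clients' parameter vectors (each update is one flat numpy array)."""
    w = np.asarray(sizes, dtype=float)
    w = w / w.sum()
    return sum(wi * u for wi, u in zip(w, updates))

def evolved_candidate(updates, sizes, distances):
    """One hypothetical GP individual: weight each update by its sample
    count discounted by its distance to the global model. A GP search
    would evolve many such expressions from these primitives and keep
    the fittest, scoring each by the validation performance of the
    aggregated model."""
    w = np.asarray(sizes, dtype=float) / (1.0 + np.asarray(distances, dtype=float))
    w = w / w.sum()
    return sum(wi * u for wi, u in zip(w, updates))
```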