Abstract
Federated learning enables model training for the consumer-driven Internet of Things (IoT) in a distributed manner without violating individual privacy. Several secure aggregation protocols have been proposed for large-scale federated learning models in IoT scenarios. However, their communication and computational overhead grow quadratically with the number of clients, which becomes a significant obstacle. To address this problem, some work replaces the complete communication graph with deterministic graphs of logarithmic degree, such as the Harary graph or the Erdős-Rényi graph; the graph generated under given fixed conditions is unique and invariant throughout the federated learning process. In this paper, we propose SparsiFL, a graph sparsification-based secure aggregation protocol for federated learning that significantly reduces communication and computational overhead while maintaining correctness and privacy. SparsiFL takes a complete graph as input and formulates the optimization as an uncertain graph sparsification task, which reduces the number of edges and redistributes the probabilities attached to them while preserving the underlying graph structure. The resulting sparse graph accurately and efficiently approximates the secret-sharing task in secure aggregation. Theoretical analysis establishes the correctness and privacy of SparsiFL. Experiments show that it reduces communication and computational overhead by up to 6.10× and 3.16×, respectively, compared to related approaches.
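The abstract describes uncertain graph sparsification at a high level: remove edges from the complete graph and redistribute the removed probability mass onto the surviving edges so that the sparse graph still approximates the original. The sketch below is purely illustrative and is not the paper's algorithm; the function name `sparsify`, the uniform edge probability `p`, and the degree-preserving rescaling rule are all assumptions chosen to make the idea concrete.

```python
# Illustrative sketch (NOT SparsiFL itself): sparsify an uncertain complete
# graph by keeping a random subset of edges and rescaling each surviving
# edge's probability so every node's incident probability mass (expected
# degree) stays close to its value in the complete graph.
import itertools
import random

def sparsify(n, p, keep_ratio, seed=0):
    """Keep `keep_ratio` of the complete graph's edges; redistribute the
    dropped probability mass onto the kept edges, capped at 1.0."""
    rng = random.Random(seed)
    edges = list(itertools.combinations(range(n), 2))
    kept = rng.sample(edges, max(1, int(len(edges) * keep_ratio)))
    # Incident probability mass per node in the complete uncertain graph.
    full_mass = {v: p * (n - 1) for v in range(n)}
    kept_deg = {v: 0 for v in range(n)}
    for u, v in kept:
        kept_deg[u] += 1
        kept_deg[v] += 1
    # Scale each kept edge by the geometric mean of the two per-endpoint
    # factors needed to restore each endpoint's expected degree.
    sparse = {}
    for u, v in kept:
        su = full_mass[u] / (p * kept_deg[u]) if kept_deg[u] else 0.0
        sv = full_mass[v] / (p * kept_deg[v]) if kept_deg[v] else 0.0
        sparse[(u, v)] = min(1.0, p * (su * sv) ** 0.5)
    return sparse

sparse = sparsify(n=8, p=0.3, keep_ratio=0.5)
print(len(sparse))  # 14 of the 28 complete-graph edges survive
```

In this toy version the redistribution rule is a simple heuristic; the paper formulates it as an optimization problem that additionally preserves structure relevant to secret sharing.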
| Original language | English |
|---|---|
| Number of pages | 13 |
| Journal | IEEE Transactions on Consumer Electronics |
| Early online date | 7 Jun 2024 |
| DOIs | |
| Publication status | E-pub ahead of print - 7 Jun 2024 |
Bibliographical note
Copyright © 2024, IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Keywords
- Communication-efficient learning
- Federated learning
- Graph sparsification
- Secure aggregation