Time-Triggered Federated Learning Over Wireless Networks

The emerging federated learning (FL) framework offers a new way to train machine learning models in a privacy-preserving manner. However, traditional FL algorithms are based on event-triggered aggregation, which suffers from straggler and communication overhead issues. To address these issues, in this paper we present a time-triggered FL algorithm (TT-Fed) over wireless networks, which is a generalized form of classic synchronous and asynchronous FL. Taking the constrained resources and unreliable nature of wireless communication into account, we jointly study the user selection and bandwidth optimization problem to minimize the FL training loss. To solve this joint optimization problem, we first provide a thorough convergence analysis for TT-Fed. Based on the obtained analytical convergence upper bound, the optimization problem is decomposed into tractable sub-problems with respect to each global aggregation round and finally solved by our proposed online search algorithm. Simulation results show that, compared to the asynchronous FL (FedAsync) and FL with asynchronous user tiers (FedAT) benchmarks, our proposed TT-Fed algorithm improves the converged test accuracy by up to 12.5% and 5%, respectively, under highly imbalanced and non-IID data, while substantially reducing the communication overhead.
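To make the time-triggered idea concrete, the following is a minimal toy sketch, not the paper's algorithm: the server aggregates at fixed-length rounds (the "time trigger") and averages whichever user updates have arrived by each deadline, so fast and slow users coexist without a synchronization barrier. All names, the scalar quadratic loss, and the per-user delay model are our own illustrative assumptions.

```python
import random

def local_update(w, data, lr=0.1):
    """One gradient step on a toy quadratic loss, (1/n) * sum_x (w - x)^2."""
    g = sum(2.0 * (w - x) for x in data) / len(data)
    return w - lr * g

def tt_fed(num_clients=5, num_rounds=20, round_len=1.0, seed=0):
    rng = random.Random(seed)
    # Each client holds toy data plus a hypothetical compute+communication delay.
    clients = [{"data": [rng.gauss(i, 0.5) for _ in range(10)],
                "delay": rng.uniform(0.2, 2.5),
                "ready_at": 0.0} for i in range(num_clients)]
    w = 0.0  # shared scalar model
    for r in range(1, num_rounds + 1):
        deadline = r * round_len  # fixed-length round: the "time trigger"
        received = []
        for c in clients:
            # A user's update counts in this round only if it finishes in time.
            if c["ready_at"] + c["delay"] <= deadline:
                received.append(local_update(w, c["data"]))
                c["ready_at"] = deadline  # next local step starts from fresh model
        if received:
            # Aggregate whatever arrived; stragglers simply join a later round.
            w = sum(received) / len(received)
    return w
```

Note that setting `round_len` very large recovers synchronous FL (every user makes each deadline), while a very small `round_len` approaches fully asynchronous behavior, which is the sense in which a time-triggered scheme generalizes both.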