Accelerating Partitioned Edge Learning via Joint Parameter-and-Bandwidth Allocation

In this paper, we consider the framework of partitioned edge learning for iteratively training a large-scale model using many resource-constrained devices (called workers). To this end, in each iteration, the model is dynamically partitioned into parametric blocks, which are downloaded to worker groups for updating using their local data. The local updates are then uploaded to the server, which cascades them to update the global model. To reduce resource usage by minimizing the total learning-and-communication latency, this work focuses on the novel joint design of parameter allocation (i.e., computation load) and bandwidth allocation (for downloading and uploading). Two design approaches are adopted. First, a practical sequential approach, called partially integrated parameter-and-bandwidth allocation (PABA), yields a scheme named parameter-aware bandwidth allocation, which allocates the largest bandwidth to the slowest worker. Second, parameter and bandwidth allocation are jointly optimized. Although the resulting problem is nonconvex, an efficient optimal algorithm is derived by nesting a bisection search with the solution of a convex problem. Experimental results using real data demonstrate that integrating PABA can substantially improve the performance of partitioned edge learning in terms of latency (e.g., by 46%) and accuracy (e.g., by 4%).
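The nested bisection-plus-convex-solve structure can be illustrated with a minimal Python sketch. This is a simplified illustration only, not the paper's actual formulation: it assumes a Shannon-type uplink rate B*log2(1 + SNR) with a fixed SNR per worker, and all function names, worker tuples, and numbers below are hypothetical. Under these assumptions the inner convex problem collapses to a closed-form per-worker bandwidth demand, and the outer bisection searches for the smallest common latency target; a worker with a longer computation time is left with less upload time and so demands more bandwidth, mirroring the rule of giving the largest bandwidth to the slowest worker.

import math

def min_bandwidth(T, t_comp, bits, snr):
    # Time left for uploading once local computation has finished.
    t_comm = T - t_comp
    if t_comm <= 0:
        return math.inf  # this worker cannot meet the target at all
    # Required rate (bits/s) divided by spectral efficiency (bits/s/Hz).
    return (bits / t_comm) / math.log2(1.0 + snr)

def feasible(T, workers, B_total):
    # Some bandwidth split meets target T iff the summed demands fit.
    return sum(min_bandwidth(T, *w) for w in workers) <= B_total

def min_latency(workers, B_total, tol=1e-6):
    # Bandwidth demands shrink monotonically as T grows, so feasibility
    # is monotone in T and bisection finds the minimal feasible target.
    lo = max(t for t, _, _ in workers)  # cannot beat the slowest computation
    hi = lo + 1.0
    while not feasible(hi, workers, B_total):
        hi *= 2.0  # grow the bracket until a feasible target is found
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid, workers, B_total) else (mid, hi)
    return hi

# Each worker: (computation latency in s, upload size in bits, uplink SNR).
workers = [(0.8, 2e6, 10.0), (1.2, 1e6, 4.0), (0.5, 3e6, 20.0)]
print(min_latency(workers, B_total=5e6))  # smallest achievable common latency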

Dingzhu Wen, Mehdi Bennis, Kaibin Huang

A4 Article in conference proceedings

GLOBECOM 2020 - 2020 IEEE Global Communications Conference

D. Wen, M. Bennis and K. Huang, "Accelerating Partitioned Edge Learning via Joint Parameter-and-Bandwidth Allocation," GLOBECOM 2020 - 2020 IEEE Global Communications Conference, Taipei, Taiwan, 2020, pp. 1-6, doi: 10.1109/GLOBECOM42002.2020.9347992

https://doi.org/10.1109/GLOBECOM42002.2020.9347992
http://urn.fi/urn:nbn:fi-fe202102235684