
# A Communication Efficient Gradient Compression Algorithm for Distributed Learning

Submit Time: 2020-05-13
Author: Tian Ye; Peijun Xiao; Ruoyu Sun
Institute: 1. Tsinghua University; 2. University of Illinois Urbana-Champaign

## Abstract

High communication complexity is a significant bottleneck in many distributed optimization problems. Recently, gradient quantization methods have received much attention in the machine learning area. Nevertheless, the theoretical understanding of this class of methods is still limited. For instance, existing works often characterize only the number of bits per iteration, not the total number of bits needed to achieve a certain accuracy. In this paper, we propose Accelerated Increasing Accuracy and Double Encoding (AIADE), an algorithm which combines Nesterov acceleration and double encoding of the gradient differences. Our algorithm uses a new quantization scheme to ensure decreasing error, which distinguishes it from other works that incur a constant quantization error in every update. AIADE is proved to achieve an accelerated linear convergence rate for strongly convex problems, which, to our knowledge, is not achieved by existing works. We also compute the total number of bits AIADE needs to achieve a certain accuracy in terms of the quantization level, which is missing in most existing works. We further extend our algorithm to limited-memory systems and show that a combination of SGD and the proposed algorithm converges at a linear rate under the Weak Growth Condition, thus leading to a small communication complexity.
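To make the "decreasing quantization error" idea concrete, the sketch below implements a generic uniform gradient quantizer whose per-coordinate error shrinks as the number of quantization levels grows. This is a minimal illustration of the general principle the abstract describes, not the AIADE algorithm itself; the function name `quantize` and the level-doubling schedule are illustrative assumptions.

```python
import numpy as np

def quantize(v, num_levels):
    """Uniformly quantize each coordinate of v to one of num_levels
    evenly spaced points in [-||v||_inf, ||v||_inf].

    Illustrative quantizer, not the paper's encoding scheme: the
    worst-case error per coordinate is ||v||_inf / (num_levels - 1),
    so finer grids (more levels, i.e. more bits) give smaller error.
    """
    norm = np.linalg.norm(v, ord=np.inf)
    if norm == 0.0:
        return v.copy()
    # Map coordinates from [-norm, norm] to grid indices [0, num_levels - 1],
    # round to the nearest grid point, then map back.
    scaled = (v / norm + 1.0) / 2.0 * (num_levels - 1)
    q = np.round(scaled) / (num_levels - 1) * 2.0 - 1.0
    return q * norm

rng = np.random.default_rng(0)
g = rng.standard_normal(1000)  # stand-in for a gradient vector

# Doubling the number of levels each round (one extra bit per coordinate)
# roughly halves the quantization error: a decreasing-error schedule.
errors = [np.linalg.norm(g - quantize(g, 2 ** k)) for k in range(2, 8)]
assert all(e_next < e_prev for e_prev, e_next in zip(errors, errors[1:]))
```

Schemes with a fixed number of levels keep a constant error floor per update; a schedule that increases the level count over iterations is one simple way to obtain errors that vanish over time, which is the property the paper's analysis relies on.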
From: Tian Ye (叶添)
DOI: 10.12074/202005.00043
Recommended reference: Tian Ye, Peijun Xiao, Ruoyu Sun. (2020). A Communication Efficient Gradient Compression Algorithm for Distributed Learning. [ChinaXiv:202005.00043]