Title
Distributed Optimization with Quantized Gradient Descent
Authors
Abstract
In this paper, we consider the unconstrained distributed optimization problem, in which the exchange of information in the network is captured by a directed graph topology; thus, nodes can only communicate with their neighbors. Additionally, the communication channels among the nodes have limited bandwidth. To alleviate this limitation, quantized messages should be exchanged among the nodes. To solve this distributed optimization problem, we combine a gradient descent method with a distributed quantized consensus algorithm (which requires the nodes to exchange quantized messages and converges in a finite number of steps). Specifically, at every optimization step, each node (i) performs a gradient descent step (i.e., subtracts the scaled gradient from its current estimate), and (ii) performs a finite-time calculation of the quantized average of every node's estimate in the network. As a consequence, the algorithm approximately mimics the centralized gradient descent algorithm. We show that our algorithm asymptotically converges to a neighborhood of the optimal solution at a linear convergence rate. The performance of the proposed algorithm is demonstrated via simple illustrative examples.
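The two-step iteration described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the paper's exact protocol: it assumes scalar quadratic local objectives f_i(x) = (x - b_i)^2 / 2, a uniform quantizer with resolution `delta`, a step size `alpha`, and an idealized consensus routine that simply returns the quantized network average to every node (standing in for the paper's finite-time distributed quantized consensus over a directed graph).

```python
import numpy as np

def quantize(x, delta=0.01):
    """Uniform quantizer with resolution delta (an assumed design choice)."""
    return delta * np.round(x / delta)

def quantized_average_consensus(estimates, delta=0.01):
    """Idealized stand-in for the finite-time quantized consensus step:
    every node ends up holding the quantized average of all estimates."""
    return np.full_like(estimates, quantize(np.mean(estimates), delta))

def distributed_quantized_gd(b, alpha=0.1, steps=100, delta=0.01):
    """b[i] is the minimizer of node i's local objective f_i(x) = (x - b_i)^2 / 2."""
    x = np.zeros_like(b, dtype=float)              # initial estimates, one per node
    for _ in range(steps):
        grad = x - b                               # local gradients of f_i
        x = x - alpha * grad                       # (i) local gradient descent step
        x = quantized_average_consensus(x, delta)  # (ii) quantized averaging step
    return x

b = np.array([1.0, 2.0, 3.0, 4.0])                 # local minimizers
print(distributed_quantized_gd(b))
# All nodes end up near the global optimum mean(b) = 2.5,
# up to a neighborhood determined by the quantization resolution.
```

Because step (ii) re-synchronizes all nodes to (a quantized version of) the network average after every gradient step, the iterates track centralized gradient descent on the average objective, which is why convergence is to a neighborhood of the optimum rather than to the exact optimum.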