Paper Title
Decentralized Stochastic Variance Reduced Extragradient Method
Paper Authors
Paper Abstract
This paper studies decentralized convex-concave minimax optimization problems of the form $\min_x\max_y f(x,y) \triangleq \frac{1}{m}\sum_{i=1}^m f_i(x,y)$, where $m$ is the number of agents and each local function has the finite-sum form $f_i(x,y)=\frac{1}{n}\sum_{j=1}^n f_{i,j}(x,y)$. We propose a novel decentralized optimization algorithm, called multi-consensus stochastic variance reduced extragradient, which achieves the best-known stochastic first-order oracle (SFO) complexity for this problem. Specifically, each agent requires $\mathcal O((n+\kappa\sqrt{n})\log(1/\varepsilon))$ SFO calls for strongly-convex-strongly-concave problems and $\mathcal O((n+\sqrt{n}L/\varepsilon)\log(1/\varepsilon))$ SFO calls for general convex-concave problems to achieve an $\varepsilon$-accurate solution in expectation, where $\kappa$ is the condition number and $L$ is the smoothness parameter. Numerical experiments show that the proposed method outperforms the baselines.
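
The abstract gives no pseudocode; for intuition, below is a minimal single-machine sketch of the kind of variance-reduced extragradient update the method builds on, applied to a toy strongly-convex-strongly-concave quadratic. Everything here (the problem data `A`, the helper `op`, the step size `eta`) is an illustrative assumption rather than the authors' algorithm, and the multi-consensus communication between agents that is central to the paper is omitted.

```python
import numpy as np

# Minimal sketch: SVRG-style variance-reduced extragradient on the toy problem
#   f(x, y) = (1/n) * sum_j [ mu/2 ||x||^2 + x^T A_j y - mu/2 ||y||^2 ],
# whose unique saddle point is (0, 0). This is a single-machine illustration;
# the paper's decentralized multi-consensus steps are deliberately omitted.

rng = np.random.default_rng(0)
n, d, mu, eta = 50, 5, 1.0, 0.02     # components, dimension, strong convexity, step size
A = rng.normal(size=(n, d, d))       # per-component coupling matrices A_j (toy data)
A_bar = A.mean(axis=0)

def op(Aj, x, y):
    """Monotone operator (grad_x f_j, -grad_y f_j) of one component."""
    return mu * x + Aj @ y, mu * y - Aj.T @ x

x, y = rng.normal(size=d), rng.normal(size=d)
for epoch in range(30):
    x0, y0 = x.copy(), y.copy()
    Gx0, Gy0 = op(A_bar, x0, y0)     # full operator at the snapshot (linear in A_j, so A_bar suffices)
    for _ in range(n):
        j = rng.integers(n)
        rx, ry = op(A[j], x0, y0)    # component operator at the snapshot
        # variance-reduced estimate at the current point, then the extrapolation step
        gx, gy = op(A[j], x, y)
        xh = x - eta * (gx - rx + Gx0)
        yh = y - eta * (gy - ry + Gy0)
        # variance-reduced estimate at the midpoint (same sample j), then the update step
        gxh, gyh = op(A[j], xh, yh)
        x = x - eta * (gxh - rx + Gx0)
        y = y - eta * (gyh - ry + Gy0)
    if epoch % 5 == 0:
        print(epoch, np.linalg.norm(x) + np.linalg.norm(y))  # distance to the saddle (0, 0)
```

Each inner iteration costs two stochastic operator evaluations (at the current point and at the extrapolated midpoint), recentered by a full operator computed once per snapshot; this per-epoch cost pattern of $n$ plus $\mathcal O(1)$ per inner step is, loosely, what $\mathcal O(n+\kappa\sqrt{n})$-type SFO bounds account for.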