Paper Title

GraB: Finding Provably Better Data Permutations than Random Reshuffling

Paper Authors

Yucheng Lu, Wentao Guo, Christopher De Sa

Paper Abstract

Random reshuffling, which randomly permutes the dataset each epoch, is widely adopted in model training because it yields faster convergence than with-replacement sampling. Recent studies indicate greedily chosen data orderings can further speed up convergence empirically, at the cost of using more computation and memory. However, greedy ordering lacks theoretical justification and has limited utility due to its non-trivial memory and computation overhead. In this paper, we first formulate an example-ordering framework named herding and answer affirmatively that SGD with herding converges at the rate $O(T^{-2/3})$ on smooth, non-convex objectives, faster than the $O(n^{1/3}T^{-2/3})$ obtained by random reshuffling, where $n$ denotes the number of data points and $T$ denotes the total number of iterations. To reduce the memory overhead, we leverage discrepancy minimization theory to propose an online Gradient Balancing algorithm (GraB) that enjoys the same rate as herding, while reducing the memory usage from $O(nd)$ to just $O(d)$ and computation from $O(n^2)$ to $O(n)$, where $d$ denotes the model dimension. We show empirically on applications including MNIST, CIFAR10, WikiText and GLUE that GraB can outperform random reshuffling in terms of both training and validation performance, and even outperform state-of-the-art greedy ordering while reducing memory usage over $100\times$.
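
To make the gradient-balancing idea concrete, below is a minimal NumPy sketch of the per-epoch reordering step that the abstract describes, using only $O(d)$ memory for the running sum. This is an illustration under simplifying assumptions, not the paper's implementation: `grab_reorder` is a hypothetical helper name, the per-example gradients are assumed to be stored densely and already centered by their mean (the paper works online with stale means), and the greedy sign rule shown is a standard discrepancy-minimization heuristic standing in for the paper's balancing routine.

```python
import numpy as np

def grab_reorder(grads):
    """Greedily balance per-example gradients to pick the next epoch's order.

    A sketch of GraB-style gradient balancing. `grads` is an (n, d) array of
    per-example gradients recorded during the current epoch, assumed centered
    (mean subtracted). Returns a permutation of example indices.
    """
    n, d = grads.shape
    running = np.zeros(d)          # O(d) state: signed prefix sum of gradients
    front, back = [], []
    for i in range(n):
        g = grads[i]
        # Greedy sign choice from discrepancy minimization: assign the sign
        # that keeps the norm of the running sum smaller.
        if np.linalg.norm(running + g) <= np.linalg.norm(running - g):
            running += g
            front.append(i)        # +1 examples are visited first, in order
        else:
            running -= g
            back.append(i)         # -1 examples are visited last, reversed
    return front + back[::-1]

# Hypothetical usage: collect per-example gradients over one epoch, center
# them with their mean, then reorder the examples for the next epoch.
rng = np.random.default_rng(0)
grads = rng.standard_normal((8, 4))
grads -= grads.mean(axis=0)
next_order = grab_reorder(grads)
```

The front/reversed-back ordering is what lets a single pass assign signs online: examples whose gradients push the running sum one way are visited early, their counterparts late, so consecutive epochs see an approximately balanced gradient sequence without ever storing all $n$ gradients at once.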
