Title

Effective Proximal Methods for Non-convex Non-smooth Regularized Learning

Authors

Guannan Liang, Qianqian Tong, Jiahao Ding, Miao Pan, Jinbo Bi

Abstract

Sparse learning is an important tool for mining useful information and patterns from high-dimensional data. Non-convex non-smooth regularized learning problems play essential roles in sparse learning and have drawn extensive attention recently. We design a family of stochastic proximal gradient methods by applying arbitrary sampling to solve the empirical risk minimization problem with a non-convex and non-smooth regularizer. These methods draw mini-batches of training examples according to an arbitrary probability distribution when computing stochastic gradients. A unified analytic approach is developed to examine the convergence and computational complexity of these methods, allowing us to compare the different sampling schemes. We show that the independent sampling scheme tends to improve performance over the commonly used uniform sampling scheme. Our new analysis also derives a tighter bound on convergence speed for uniform sampling than the best one available so far. Empirical evaluations demonstrate that the proposed algorithms converge faster than the state of the art.
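
To make the sampling mechanism concrete, below is a minimal sketch (Python/NumPy) of how one proximal stochastic gradient step with independent sampling could look. It is an illustration under stated assumptions, not the paper's exact algorithm: the l0 penalty, the helper names prox_l0 and grad_i, and the inclusion probabilities probs are all hypothetical choices made for this example.

```python
import numpy as np

def prox_l0(v, thresh):
    """Hard thresholding: the proximal operator of the non-convex,
    non-smooth penalty lam * ||x||_0 under step size eta, where
    thresh = sqrt(2 * eta * lam)."""
    out = v.copy()
    out[np.abs(v) < thresh] = 0.0
    return out

def prox_sgd_independent(grad_i, x0, n, probs, lam, eta, iters, rng):
    """Proximal SGD where each iteration draws a mini-batch by
    independent sampling: example i joins the batch with probability
    probs[i]. Importance weights 1/probs[i] keep the estimate unbiased."""
    x = x0.copy()
    for _ in range(iters):
        batch = np.nonzero(rng.random(n) < probs)[0]  # independent sampling
        if batch.size == 0:
            continue
        g = np.zeros_like(x)
        for i in batch:
            g += grad_i(i, x) / probs[i]   # importance-weighted gradient
        g /= n                             # E[g] equals the full gradient
        x = prox_l0(x - eta * g, np.sqrt(2.0 * eta * lam))
    return x

# Tiny usage example with a least-squares loss f_i(x) = 0.5 * (a_i @ x - b_i)**2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 20)), rng.standard_normal(100)
grad = lambda i, x: A[i] * (A[i] @ x - b[i])
probs = np.full(100, 0.2)  # hypothetical inclusion probabilities
x_hat = prox_sgd_independent(grad, np.zeros(20), 100, probs,
                             lam=0.01, eta=0.05, iters=300, rng=rng)
```

The importance weights 1/probs[i] keep the mini-batch gradient unbiased for any choice of inclusion probabilities, which is what allows non-uniform schemes such as independent sampling to be compared against uniform sampling on an equal footing.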
