Paper Title

Extensions to the Proximal Distance Method of Constrained Optimization

Paper Authors

Alfonso Landeros, Oscar Hernan Madrid Padilla, Hua Zhou, Kenneth Lange

Paper Abstract

The current paper studies the problem of minimizing a loss $f(\boldsymbol{x})$ subject to constraints of the form $\boldsymbol{D}\boldsymbol{x} \in S$, where $S$ is a closed set, convex or not, and $\boldsymbol{D}$ is a matrix that fuses parameters. Fusion constraints can capture smoothness, sparsity, or more general constraint patterns. To tackle this generic class of problems, we combine the Beltrami-Courant penalty method with the proximal distance principle. The latter is driven by minimization of penalized objectives $f(\boldsymbol{x})+\frac{\rho}{2}\text{dist}(\boldsymbol{D}\boldsymbol{x},S)^2$ involving large tuning constants $\rho$ and the squared Euclidean distance of $\boldsymbol{D}\boldsymbol{x}$ from $S$. The next iterate $\boldsymbol{x}_{n+1}$ of the corresponding proximal distance algorithm is constructed from the current iterate $\boldsymbol{x}_n$ by minimizing the majorizing surrogate function $f(\boldsymbol{x})+\frac{\rho}{2}\|\boldsymbol{D}\boldsymbol{x}-\mathcal{P}_{S}(\boldsymbol{D}\boldsymbol{x}_n)\|^2$. For fixed $\rho$, a subanalytic loss $f(\boldsymbol{x})$, and a subanalytic constraint set $S$, we prove convergence to a stationary point. Under stronger assumptions, we provide convergence rates and demonstrate linear local convergence. We also construct a steepest descent (SD) variant to avoid costly linear system solves. To benchmark our algorithms, we compare against the alternating direction method of multipliers (ADMM). Our extensive numerical tests include problems on metric projection, convex regression, convex clustering, total variation image denoising, and projection of a matrix to a good condition number. These experiments demonstrate the superior speed and acceptable accuracy of our steepest descent variant on high-dimensional problems.
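To make the iteration concrete, below is a minimal sketch for the special case of a quadratic loss $f(\boldsymbol{x})=\frac{1}{2}\|\boldsymbol{A}\boldsymbol{x}-\boldsymbol{b}\|^2$, where the surrogate minimizer solves the linear system $(\boldsymbol{A}^\top\boldsymbol{A}+\rho\boldsymbol{D}^\top\boldsymbol{D})\boldsymbol{x}=\boldsymbol{A}^\top\boldsymbol{b}+\rho\boldsymbol{D}^\top\mathcal{P}_S(\boldsymbol{D}\boldsymbol{x}_n)$. The function names `proximal_distance` and `project_sparse`, the choice of $S$ as the set of $k$-sparse vectors, and the first-difference fusion matrix are illustrative assumptions, not taken from the paper; the `sd=True` branch imitates the steepest descent variant by replacing the linear solve with one exact gradient step on the quadratic surrogate.

```python
import numpy as np

def proximal_distance(A, b, D, project_S, rho=1.0, n_iter=500, sd=False):
    """Sketch of the proximal distance iteration for the quadratic loss
    f(x) = 0.5 * ||A x - b||^2 subject to D x in S.

    Each step forms the anchor P_S(D x_n) and minimizes the surrogate
    f(x) + (rho/2) * ||D x - P_S(D x_n)||^2, either exactly via a linear
    solve or, when sd=True, by one exact steepest descent step.
    """
    x = np.zeros(A.shape[1])
    AtA, Atb = A.T @ A, A.T @ b
    H = AtA + rho * (D.T @ D)               # surrogate Hessian, fixed for fixed rho
    for _ in range(n_iter):
        anchor = project_S(D @ x)           # P_S(D x_n)
        rhs = Atb + rho * (D.T @ anchor)
        if sd:                              # SD variant: avoid the linear solve
            g = H @ x - rhs                 # gradient of the quadratic surrogate
            step = (g @ g) / (g @ (H @ g) + 1e-12)  # exact line search
            x = x - step * g
        else:
            x = np.linalg.solve(H, rhs)     # exact surrogate minimization
    return x

# Illustrative constraint: S = {k-sparse vectors}; its Euclidean projection
# keeps the k largest entries in magnitude (hard thresholding).
def project_sparse(z, k=5):
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-k:]
    out[keep] = z[keep]
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
b = rng.standard_normal(100)
D = np.eye(50) - np.eye(50, k=1)            # first-difference fusion matrix
x_hat = proximal_distance(A, b, D, project_sparse, rho=100.0, sd=True)
```

The design trade-off the abstract highlights is visible here: the exact solve factors or inverts the surrogate Hessian each step, while the SD branch needs only matrix-vector products, which is what makes the steepest descent variant attractive on high-dimensional problems.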
