Paper Title

Teaching Networks to Solve Optimization Problems

Paper Authors

Xinran Liu, Yuzhe Lu, Ali Abbasi, Meiyi Li, Javad Mohammadi, Soheil Kolouri

Paper Abstract

Leveraging machine learning to facilitate the optimization process is an emerging field that holds the promise of bypassing the fundamental computational bottleneck caused by classic iterative solvers in critical applications requiring near-real-time optimization. The majority of existing approaches focus on learning data-driven optimizers that lead to fewer iterations when solving an optimization problem. In this paper, we take a different approach and propose to replace the iterative solvers altogether with a trainable parametric set function that outputs the optimal arguments/parameters of an optimization problem in a single feedforward pass. We denote our method as Learning to Optimize the Optimization Process (LOOP). We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems, including linear/nonlinear regression, principal component analysis, transport-based coreset, and quadratic programming in supply management applications. In addition, we propose two alternative approaches for learning such parametric functions, with and without a solver in the LOOP. Finally, through various numerical experiments, we show that the trained solvers could be orders of magnitude faster than the classic iterative solvers while providing near-optimal solutions.
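
The sketch below illustrates the solver-free flavor of this idea on one of the listed tasks, linear regression: a permutation-invariant (DeepSets-style) network maps an entire dataset to regression parameters in a single forward pass and is trained by minimizing the least-squares objective directly, with no iterative solver in the training loop. This is a minimal sketch only; the architecture, the name LoopRegressor, and all hyperparameters are assumptions made for illustration and are not taken from the paper.

# Minimal sketch (assumed, not from the paper): a set network that maps a
# regression dataset {(x_i, y_i)} to weights [w, b] in one forward pass,
# trained by minimizing the least-squares objective itself (solver-free).
import torch
import torch.nn as nn

class LoopRegressor(nn.Module):
    """DeepSets-style set function: (batch, n_points, d+1) -> (batch, d+1)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(dim + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim + 1))  # outputs [w, b]

    def forward(self, X, y):
        # Encode each (x_i, y_i) pair, pool over the set, decode parameters.
        h = self.phi(torch.cat([X, y.unsqueeze(-1)], dim=-1)).mean(dim=1)
        return self.rho(h)

def objective(theta, X, y):
    # Least-squares loss of the predicted parameters on the input set.
    w, b = theta[:, :-1], theta[:, -1:]
    pred = (X * w.unsqueeze(1)).sum(-1) + b
    return ((pred - y) ** 2).mean()

if __name__ == "__main__":
    dim, model = 3, LoopRegressor(3)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(2000):
        # Sample random regression problem instances (sets) as training data.
        X = torch.randn(32, 50, dim)
        w_true = torch.randn(32, dim)
        y = (X * w_true.unsqueeze(1)).sum(-1) + 0.05 * torch.randn(32, 50)
        loss = objective(model(X, y), X, y)
        opt.zero_grad(); loss.backward(); opt.step()
    # At inference time, a new problem instance is "solved" in one forward pass.

The permutation-invariant pooling is what lets a single trained network handle arbitrary problem instances of the same family. The alternative training mode mentioned in the abstract, with a solver in the LOOP, would instead supervise the network's output against solutions produced by a classical solver rather than minimizing the objective directly.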
