Paper Title
Counterexample-Guided Learning of Monotonic Neural Networks
Paper Authors
Paper Abstract
The widespread adoption of deep learning is often attributed to its automatic feature construction with minimal inductive bias. However, in many real-world tasks, the learned function is intended to satisfy domain-specific constraints. We focus on monotonicity constraints, which are common and require that the function's output increases with increasing values of specific input features. We develop a counterexample-guided technique to provably enforce monotonicity constraints at prediction time. Additionally, we propose a technique to use monotonicity as an inductive bias for deep learning. It works by iteratively incorporating monotonicity counterexamples into the learning process. Contrary to prior work in monotonic learning, we target general ReLU neural networks and do not further restrict the hypothesis space. We have implemented these techniques in a tool called COMET. Experiments on real-world datasets demonstrate that our approach achieves state-of-the-art results compared to existing monotonic learners, and can improve model quality compared to models trained without monotonicity constraints.
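The two ideas in the abstract, a counterexample-guided training loop and prediction-time enforcement, can be sketched as follows. This is a minimal illustrative sketch only, not COMET's implementation: random search stands in for solver-based counterexample generation, a grid max stands in for the envelope construction, and all names (`find_counterexample`, `cegis_train`, `monotone_envelope`, `train_fn`) are hypothetical.

```python
import numpy as np

def find_counterexample(model, X, mono_idx, n_trials=1000, seed=0):
    """Search for a monotonicity violation: a pair (x, x_up) where x_up
    increases only the monotone feature, yet the model scores it lower.
    (Random search here; COMET uses solver-based verification instead.)"""
    rng = np.random.default_rng(seed)
    for _ in range(n_trials):
        x = X[rng.integers(len(X))].astype(float)
        x_up = x.copy()
        x_up[mono_idx] += rng.uniform(0.1, 1.0)  # increase the monotone feature
        if model(x_up) < model(x):               # violation of monotonicity
            return x, x_up
    return None

def cegis_train(train_fn, model, X, y, mono_idx, rounds=5):
    """Counterexample-guided loop: while violations are found, fold the
    violating pair back into the training data with ordered labels and
    retrain, using monotonicity as an inductive bias."""
    for _ in range(rounds):
        ce = find_counterexample(model, X, mono_idx)
        if ce is None:
            break                                 # no violation found
        x_lo, x_up = ce
        lo, hi = sorted([model(x_lo), model(x_up)])
        X = np.vstack([X, x_lo, x_up])
        y = np.concatenate([y, [lo, hi]])         # labels respect the ordering
        model = train_fn(X, y)
    return model

def monotone_envelope(model, x, mono_idx, feat_min, feat_max, n_grid=50):
    """Prediction-time enforcement for one monotone feature: return the max
    of the model over a fixed grid of feature values <= x[mono_idx]. The
    candidate set only grows as the feature grows, so the wrapped predictor
    is non-decreasing in that feature by construction (a coarse, sample-based
    stand-in for a provably correct envelope)."""
    grid = np.linspace(feat_min, feat_max, n_grid)
    vals = []
    for v in grid:
        if v <= x[mono_idx]:
            xv = np.array(x, dtype=float)
            xv[mono_idx] = v
            vals.append(model(xv))
    return max(vals) if vals else model(np.asarray(x, dtype=float))
```

A toy non-monotone model such as `lambda x: -x[0]` immediately yields a counterexample under `find_counterexample`, and wrapping it in `monotone_envelope` makes its predictions non-decreasing in feature 0.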