Paper Title

Abstraction and Refinement: Towards Scalable and Exact Verification of Neural Networks

Authors

Jiaxiang Liu, Yunhan Xing, Xiaomu Shi, Fu Song, Zhiwu Xu, Zhong Ming

Abstract

As a new programming paradigm, deep neural networks (DNNs) have been increasingly deployed in practice, but their lack of robustness hinders their application in safety-critical domains. While there are techniques for verifying DNNs with formal guarantees, they are limited in scalability and accuracy. In this paper, we present a novel abstraction-refinement approach for scalable and exact DNN verification. Specifically, we propose a novel abstraction to reduce the size of DNNs by over-approximation. The result of verifying the abstract DNN is always conclusive if no spurious counterexample is reported. To eliminate spurious counterexamples introduced by abstraction, we propose a novel counterexample-guided refinement that refines the abstract DNN to exclude a given spurious counterexample while still over-approximating the original one. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For demonstration, we implement our approach using two promising and exact tools, Marabou and Planet, as the underlying verification engines, and evaluate it on the widely used benchmarks ACAS Xu, MNIST, and CIFAR-10. The results show that our approach can boost their performance by solving more problems and reducing verification time by up to 86.3% and 78.0%, respectively. Compared to the most relevant abstraction-refinement approach, our approach is 11.6-26.6 times faster.
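The abstract describes a classic CEGAR (counterexample-guided abstraction refinement) loop: verify an over-approximated network, and either conclude safety, confirm a real counterexample, or refine the abstraction to exclude a spurious one. The skeleton below is a minimal, generic sketch of that loop, not the paper's implementation; all function names (`abstract`, `verify`, `is_spurious`, `refine`) are hypothetical placeholders for the corresponding components.

```python
def cegar_verify(dnn, prop, abstract, verify, is_spurious, refine):
    """Generic CEGAR loop sketch (hypothetical interfaces, not the paper's API).

    abstract(dnn)          -> an over-approximation of `dnn`
    verify(abs_dnn, prop)  -> ("safe", None) or ("unsafe", counterexample)
    is_spurious(dnn, cex, prop) -> True if `cex` does not violate `prop`
                                   on the original network
    refine(abs_dnn, cex)   -> a tighter abstraction that excludes `cex`
                              while still over-approximating `dnn`
    """
    abs_dnn = abstract(dnn)
    while True:
        status, cex = verify(abs_dnn, prop)
        if status == "safe":
            # The abstraction over-approximates the original network,
            # so safety of the abstraction implies safety of the original.
            return "safe", None
        if not is_spurious(dnn, cex, prop):
            # The counterexample is real on the original network.
            return "unsafe", cex
        # Spurious counterexample: refine the abstraction to rule it out.
        abs_dnn = refine(abs_dnn, cex)
```

Termination relies on each refinement step strictly tightening the abstraction (as the paper's refinement guarantees by excluding the given spurious counterexample); the loop itself is agnostic to how the abstraction and refinement are realized.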
