Paper Title
A Closer Look at Accuracy vs. Robustness
Paper Authors
Paper Abstract
Current methods for training robust networks lead to a drop in test accuracy, which has led prior works to posit that a robustness-accuracy tradeoff may be inevitable in deep learning. We take a closer look at this phenomenon and first show that real image datasets are actually separated. With this property in mind, we then prove that robustness and accuracy should both be achievable for benchmark datasets through locally Lipschitz functions, and hence, there should be no inherent tradeoff between robustness and accuracy. Through extensive experiments with robustness methods, we argue that the gap between theory and practice arises from two limitations of current methods: either they fail to impose local Lipschitzness or they are insufficiently generalized. We explore combining dropout with robust training methods and obtain better generalization. We conclude that achieving robustness and accuracy in practice may require using methods that impose local Lipschitzness and augmenting them with deep learning generalization techniques. Code available at https://github.com/yangarbiter/robust-local-lipschitz
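The abstract's central quantity is local Lipschitzness: how much a function's output can change relative to a small input perturbation around a given point. As an illustration only (not the paper's exact measurement procedure), the following is a minimal sketch of estimating a local Lipschitz constant by random sampling within a small infinity-norm ball; the function name and sampling scheme are assumptions of this sketch, not from the paper.

```python
import numpy as np

def local_lipschitz_estimate(f, x, eps=0.1, n_samples=100, rng=None):
    """Estimate the local Lipschitz constant of f around x.

    Samples random perturbations delta with ||delta||_inf <= eps and
    returns the largest observed ratio
        ||f(x + delta) - f(x)||_inf / ||delta||_inf.
    This is a crude lower bound on the true local Lipschitz constant.
    """
    rng = np.random.default_rng(rng)
    fx = f(x)
    best = 0.0
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x.shape)
        denom = np.max(np.abs(delta))
        if denom == 0.0:
            continue  # skip the degenerate zero perturbation
        ratio = np.max(np.abs(f(x + delta) - fx)) / denom
        best = max(best, ratio)
    return best
```

For a linear map such as `f(z) = 2 * z`, the estimate recovers the true constant (2.0) regardless of the sampled perturbations, which is a quick sanity check for the procedure.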