Paper Title

Analysis of Random Perturbations for Robust Convolutional Neural Networks

Authors

Adam Dziedzic, Sanjay Krishnan

Abstract

Recent work has extensively shown that randomized perturbations of neural networks can improve robustness to adversarial attacks. The literature, however, lacks a detailed compare-and-contrast of the latest proposals to understand which classes of perturbations work, when they work, and why they work. We contribute a detailed evaluation that elucidates these questions and benchmarks perturbation-based defenses consistently. In particular, we show five main results: (1) all input perturbation defenses, whether random or deterministic, are equivalent in their efficacy; (2) attacks transfer between perturbation defenses, so an attacker need not know the specific type of defense, only that it involves perturbations; (3) a tuned sequence of noise layers across a network provides the best empirical robustness; (4) perturbation-based defenses offer almost no robustness to adaptive attacks unless these perturbations are observed during training; and (5) adversarial examples in a close neighborhood of the original inputs show elevated sensitivity to perturbations in first- and second-order analyses.
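Result (3) refers to interleaving tunable noise layers through the network. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the `GaussianNoise` module, the per-layer `sigma` values, and the toy CNN are illustrative assumptions, and the final lines add a simple first-order sensitivity probe in the spirit of result (5).

```python
# Illustrative sketch (assumption): Gaussian noise layers interleaved in a CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianNoise(nn.Module):
    """Additive Gaussian noise; sigma is the tunable noise scale."""
    def __init__(self, sigma: float):
        super().__init__()
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Perturb activations at inference as well as training, since the
        # abstract notes such defenses resist adaptive attacks only when
        # the perturbations are also observed during training.
        return x + self.sigma * torch.randn_like(x)

# Toy CNN with a "sequence of noise layers"; the sigma values are
# placeholders for the per-layer tuning the abstract describes.
model = nn.Sequential(
    GaussianNoise(sigma=0.2),                      # input perturbation
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    GaussianNoise(sigma=0.1),                      # internal perturbation
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

# First-order sensitivity probe: the input-gradient norm of the loss is a
# simple proxy for how sensitive a point is to small perturbations.
x = torch.randn(8, 3, 32, 32, requires_grad=True)  # CIFAR-10-sized batch
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(x), labels)
loss.backward()
print(loss.item(), x.grad.norm().item())
```

In this sketch the noise is resampled on every forward pass; a second-order analysis would additionally examine the curvature of the loss along the perturbation direction, which the sketch omits.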
