Paper Title
Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks
Paper Authors
Paper Abstract
This paper investigates the capabilities of Privacy-Preserving Deep Learning (PPDL) mechanisms against various forms of privacy attacks. First, we propose to quantitatively measure the trade-off between model accuracy and the privacy losses incurred by reconstruction, tracing, and membership attacks. Second, we formulate reconstruction attacks as solving a noisy system of linear equations, and prove that such attacks are guaranteed to be defeated if condition (2) is unfulfilled. Third, based on this theoretical analysis, a novel Secret Polarization Network (SPN) is proposed to thwart privacy attacks that pose serious challenges to existing PPDL methods. Extensive experiments show that model accuracies improve on average by 5-20% compared with baseline mechanisms, in regimes where data privacy is satisfactorily protected.
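
As a rough illustration of the abstract's second point (a minimal sketch under simplifying assumptions, not the authors' implementation), reconstructing a training input from the shared gradients of a single fully-connected layer can be posed as a linear system: for z = Wx + b, the weight gradient is the outer product of the bias gradient and the input, so x is recoverable by least squares, and perturbing the gradients (as in noise-based PPDL defenses) turns this into a noisy linear system whose solution degrades as the noise grows. All names below (reconstruct, noise_std) are hypothetical.

# Minimal sketch (not the paper's code): input reconstruction from the
# noisy gradients of one fully-connected layer, posed as a noisy linear
# system and solved by least squares.
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 32, 10
x = rng.normal(size=d_in)                  # private training input
W = rng.normal(size=(d_out, d_in))
b = np.zeros(d_out)

# Forward pass of a linear layer with a squared-error loss against a random target.
target = rng.normal(size=d_out)
z = W @ x + b
dz = 2.0 * (z - target)                    # dL/dz

# Gradients a client would share in federated learning:
#   dL/dW = outer(dz, x),  dL/db = dz
grad_W = np.outer(dz, x)
grad_b = dz

def reconstruct(grad_W, grad_b, noise_std=0.0):
    """Estimate x by solving outer(grad_b, x) ~= grad_W in the least-squares sense."""
    gW = grad_W + rng.normal(scale=noise_std, size=grad_W.shape)
    gb = grad_b + rng.normal(scale=noise_std, size=grad_b.shape)
    # Solve gb (d_out x 1) @ x^T (1 x d_in) ~= gW (d_out x d_in) for x.
    x_hat, *_ = np.linalg.lstsq(gb.reshape(-1, 1), gW, rcond=None)
    return x_hat.ravel()

for noise in [0.0, 0.1, 1.0]:
    err = np.linalg.norm(reconstruct(grad_W, grad_b, noise) - x) / np.linalg.norm(x)
    print(f"noise_std={noise:<4} relative reconstruction error = {err:.3f}")

With noise_std = 0 the input is recovered almost exactly, while increasing the noise level makes the recovered x progressively less accurate, which is the model-accuracy versus privacy-loss trade-off the abstract proposes to quantify.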