Paper Title
Local Differential Privacy for Federated Learning
Paper Authors
Paper Abstract
Advanced adversarial attacks such as membership inference and model memorization can make federated learning (FL) vulnerable and potentially leak sensitive private data. Local differentially private (LDP) approaches are gaining popularity over other differentially private (DP) solutions due to their stronger privacy notion and native support for distributed data. However, DP approaches assume that the FL server (which aggregates the models) is honest (runs the FL protocol honestly) or semi-honest (runs the FL protocol honestly while also trying to learn as much information as possible). These assumptions make such approaches unrealistic and unreliable for real-world settings. Moreover, in real-world industrial environments (e.g., healthcare), the distributed entities (e.g., hospitals) already run local machine learning models (a setting also referred to as the cross-silo setting). Existing approaches do not provide a scalable mechanism for privacy-preserving FL in such settings, potentially involving untrusted parties. This paper proposes a new local differentially private FL protocol (named LDPFL) for industrial settings. LDPFL can run in industrial settings with untrusted entities while enforcing stronger privacy guarantees than existing approaches. Compared to existing methods, LDPFL shows high FL model performance (up to 98%) under small privacy budgets (e.g., epsilon = 0.5).
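For intuition, the local-DP notion the abstract relies on can be sketched as follows: each client randomizes its own model update before release, so even an untrusted aggregator only ever sees perturbed values. The snippet below is a minimal illustrative sketch, not the paper's LDPFL construction; the `ldp_randomize` helper, the choice of the Laplace mechanism with L1 clipping, and the use of NumPy are all assumptions made here for illustration, with epsilon = 0.5 matching the budget quoted above.

```python
# Minimal sketch of client-side local DP for FL (NOT the paper's LDPFL
# protocol): each client perturbs its model update locally before sending
# it to an untrusted aggregator, so no raw update ever leaves the client.
import numpy as np

def ldp_randomize(update: np.ndarray, epsilon: float, clip: float = 1.0) -> np.ndarray:
    """Clip the update to L1 norm `clip`, then add Laplace noise.

    In the local model, any two possible inputs count as neighbors, so the
    L1 sensitivity of the clipped update is at most 2 * clip; Laplace noise
    with scale 2 * clip / epsilon then yields epsilon-LDP for this release.
    """
    norm = np.abs(update).sum()
    if norm > clip:
        update = update * (clip / norm)
    scale = 2.0 * clip / epsilon
    return update + np.random.laplace(loc=0.0, scale=scale, size=update.shape)

# Untrusted server: plain averaging of already-randomized client updates.
client_updates = [np.random.randn(10) * 0.01 for _ in range(5)]
noisy_reports = [ldp_randomize(u, epsilon=0.5) for u in client_updates]
aggregate = np.mean(noisy_reports, axis=0)
```

Because each client's randomizer already satisfies epsilon-LDP on its own, the guarantee holds no matter what the server does with the reports, which is what makes an untrusted aggregator tolerable in this setting.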