Paper Title

Privacy Safe Representation Learning via Frequency Filtering Encoder

Paper Authors

Jonghu Jeong, Minyong Cho, Philipp Benz, Jinwoo Hwang, Jeewook Kim, Seungkwan Lee, Tae-hoon Kim

Paper Abstract

Deep learning models are increasingly deployed in real-world applications. These models are often deployed on the server side and receive user data in an information-rich representation to solve a specific task, such as image classification. Since images can contain sensitive information that users might not be willing to share, privacy protection becomes increasingly important. Adversarial Representation Learning (ARL) is a common approach for training an encoder that runs on the client side and obfuscates an image. It is assumed that the obfuscated image can safely be transmitted and used for the task on the server without privacy concerns. However, in this work, we find that a trained reconstruction attacker can successfully recover the original images from the representations produced by existing ARL methods. To address this, we introduce a novel ARL method enhanced through low-pass filtering, limiting the amount of information to be encoded in the frequency domain. Our experimental results reveal that our approach withstands reconstruction attacks while outperforming previous state-of-the-art methods with respect to the privacy-utility trade-off. We further conduct a user study to qualitatively assess our defense against reconstruction attacks.
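To make the core idea of frequency filtering concrete, here is a minimal sketch of low-pass filtering an image in the frequency domain. This is not the authors' encoder, only an illustration of the underlying operation the abstract describes; the function name `low_pass_filter` and the `cutoff` parameter are illustrative assumptions.

```python
import numpy as np

def low_pass_filter(image: np.ndarray, cutoff: float) -> np.ndarray:
    """Illustrative sketch (not the paper's encoder): keep only the
    frequencies within a centered radius, controlled by `cutoff`
    (a fraction of the half-spectrum), and discard the rest."""
    h, w = image.shape[:2]
    # Transform to the frequency domain and center the zero frequency.
    spectrum = np.fft.fftshift(np.fft.fft2(image, axes=(0, 1)), axes=(0, 1))
    # Circular mask around the spectrum center; radius scales with `cutoff`.
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff * min(h, w) / 2.0
    mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= radius ** 2
    spectrum *= mask[..., None] if image.ndim == 3 else mask
    # Invert the transform; the imaginary residue is numerical noise.
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum, axes=(0, 1)), axes=(0, 1))
    return np.real(filtered)

# Example: keep only the lowest 25% of frequencies of a random "image".
img = np.random.rand(64, 64)
obfuscated = low_pass_filter(img, cutoff=0.25)
```

Lowering `cutoff` discards more high-frequency detail, which is what makes reconstruction harder at the cost of task accuracy, i.e., the privacy-utility trade-off the abstract refers to.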
