Paper Title

Holistic risk assessment of inference attacks in machine learning

Authors

Yang, Yang

Abstract

As machine learning applications expand, there are increasingly unignorable privacy and security issues. In particular, inference attacks against machine learning models allow adversaries to infer sensitive information about the target model, such as its training data and model parameters. Inference attacks can lead to serious consequences, including violating individuals' privacy and compromising the intellectual property of the model owner. Researchers have studied and analyzed several types of inference attacks in depth, albeit in isolation, but there is still a lack of a holistic risk assessment of inference attacks against machine learning models, covering their application in different scenarios, the common factors affecting their performance, and the relationships among the attacks. This paper therefore performs a holistic risk assessment of different inference attacks against machine learning models. It focuses on three representative attacks: the membership inference attack, the attribute inference attack, and the model stealing attack, and establishes a threat model taxonomy. A total of 12 target models using three model architectures, namely AlexNet, ResNet18, and a simple CNN, are trained on four datasets: CelebA, UTKFace, STL10, and FMNIST.
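The membership inference attack named in the abstract can be illustrated with a minimal confidence-threshold baseline: a model is often more confident on samples it was trained on, so an adversary can guess "member" when the target model's top-class confidence is high. The threshold and the toy confidence values below are illustrative assumptions for a sketch, not the paper's actual method or data.

```python
# Minimal sketch of a confidence-threshold membership inference attack.
# All thresholds and sample confidences here are illustrative assumptions.

def membership_inference(confidence: float, threshold: float = 0.9) -> bool:
    """Predict 'member' when the target model's top-class confidence
    meets the threshold (an overfitting signal)."""
    return confidence >= threshold

# Toy evaluation: members tend to receive high confidence, non-members low.
member_confidences = [0.99, 0.95, 0.97, 0.88]     # samples in the training set
nonmember_confidences = [0.55, 0.62, 0.93, 0.48]  # unseen samples

true_positives = sum(membership_inference(c) for c in member_confidences)
false_positives = sum(membership_inference(c) for c in nonmember_confidences)
true_negatives = len(nonmember_confidences) - false_positives
accuracy = (true_positives + true_negatives) / (
    len(member_confidences) + len(nonmember_confidences)
)
```

In practice the attacks studied in such assessments are more elaborate (e.g. shadow models), but this baseline captures the core signal they exploit.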
