Paper Title

Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era

Paper Authors

Changjiang Li, Li Wang, Shouling Ji, Xuhong Zhang, Zhaohan Xi, Shanqing Guo, Ting Wang

Paper Abstract

Facial Liveness Verification (FLV) is widely used for identity authentication in many security-sensitive domains and offered as Platform-as-a-Service (PaaS) by leading cloud vendors. Yet, with the rapid advances in synthetic media techniques (e.g., deepfake), the security of FLV is facing unprecedented challenges, about which little is known thus far. To bridge this gap, in this paper, we conduct the first systematic study on the security of FLV in real-world settings. Specifically, we present LiveBugger, a new deepfake-powered attack framework that enables customizable, automated security evaluation of FLV. Leveraging LiveBugger, we perform a comprehensive empirical assessment of representative FLV platforms, leading to a set of interesting findings. For instance, most FLV APIs do not use anti-deepfake detection; even for those with such defenses, their effectiveness is concerning (e.g., they may detect high-quality synthesized videos but fail to detect low-quality ones). We then conduct an in-depth analysis of the factors impacting the attack performance of LiveBugger: a) the bias (e.g., gender or race) in FLV can be exploited to select victims; b) adversarial training makes deepfakes more effective at bypassing FLV; c) input quality has a varying influence on how effectively different deepfake techniques bypass FLV. Based on these findings, we propose a customized, two-stage approach that can boost the attack success rate by up to 70%. Further, we run proof-of-concept attacks on several representative applications of FLV (i.e., the clients of FLV APIs) to illustrate the practical implications: due to the vulnerability of the APIs, many downstream applications are vulnerable to deepfake attacks. Finally, we discuss potential countermeasures to improve the security of FLV. Our findings have been confirmed by the corresponding vendors.
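
The abstract describes LiveBugger's automated evaluation of cloud FLV APIs only at a high level. As a rough illustration of that workflow, the Python sketch below submits deepfake-synthesized videos to an FLV verification endpoint and tallies the fraction that pass liveness checks. This is not the authors' implementation: the endpoint URL, API key, request fields, and response schema are hypothetical placeholders, and real vendor APIs differ in authentication and payload format.

```python
"""Minimal sketch of an automated FLV bypass-rate probe (hypothetical API)."""

import pathlib
import requests

FLV_ENDPOINT = "https://api.example-vendor.com/v1/liveness/verify"  # placeholder URL
API_KEY = "YOUR_API_KEY"  # placeholder credential


def verify_liveness(video_path: pathlib.Path) -> bool:
    """Submit one video to the (hypothetical) FLV API; True if it passes as live."""
    with video_path.open("rb") as f:
        resp = requests.post(
            FLV_ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"video": f},
            timeout=30,
        )
    resp.raise_for_status()
    # Assumed response shape: {"live": true} when the video passes verification.
    return bool(resp.json().get("live", False))


def bypass_rate(video_dir: str) -> float:
    """Fraction of synthesized videos in video_dir accepted as live by the API."""
    videos = sorted(pathlib.Path(video_dir).glob("*.mp4"))
    if not videos:
        return 0.0
    passed = sum(verify_liveness(v) for v in videos)
    return passed / len(videos)


if __name__ == "__main__":
    print(f"Bypass rate: {bypass_rate('deepfake_videos'):.2%}")
```

In the paper's setting, such a loop would be wrapped with deepfake generation and candidate selection (e.g., the proposed two-stage approach) so that the videos submitted are those most likely to evade a given platform's checks.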
