Paper Title

NICGSlowDown: Evaluating the Efficiency Robustness of Neural Image Caption Generation Models

Paper Authors

Simin Chen, Zihe Song, Mirazul Haque, Cong Liu, Wei Yang

Paper Abstract

Neural image caption generation (NICG) models have received massive attention from the research community due to their excellent performance in visual understanding. Existing work focuses on improving the accuracy of NICG models, while their efficiency is less explored. However, many real-world applications require real-time feedback, which relies heavily on the efficiency of NICG models. Recent research observed that the efficiency of NICG models can vary across different inputs. This observation exposes a new attack surface for NICG models: an adversary might be able to slightly change inputs to cause the NICG models to consume more computational resources. To further understand such efficiency-oriented threats, we propose a new attack approach, NICGSlowDown, to evaluate the efficiency robustness of NICG models. Our experimental results show that NICGSlowDown can generate images with human-unnoticeable perturbations that increase NICG model latency by up to 483.86%. We hope this research raises the community's concern about the efficiency robustness of NICG models.
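The abstract describes the threat only at a high level. As a purely illustrative sketch (not the paper's NICGSlowDown algorithm), the following PyTorch example shows one way an L∞-bounded perturbation could be optimized to suppress the end-of-sequence (EOS) token so that a greedy caption decoder runs for more steps, which is the general mechanism behind a slowdown attack. The TinyCaptioner model, the EOS-suppression loss, and all hyperparameters (eps, alpha, steps) are placeholder assumptions chosen only to keep the example self-contained and runnable.

```python
# Hypothetical slowdown-attack sketch: perturb an image so a toy captioning
# model's greedy decoder avoids emitting EOS and therefore decodes longer.
import torch
import torch.nn as nn

VOCAB, EOS, MAX_LEN = 32, 0, 30


class TinyCaptioner(nn.Module):
    """A toy CNN encoder + GRU decoder standing in for a real NICG model."""

    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, hidden))
        self.embed = nn.Embedding(VOCAB, hidden)
        self.gru = nn.GRUCell(hidden, hidden)
        self.head = nn.Linear(hidden, VOCAB)

    def decode(self, image, max_len=MAX_LEN):
        """Greedy decoding; returns per-step logits and generated token ids."""
        h = self.encoder(image)
        tok = torch.full((image.size(0),), EOS, dtype=torch.long)
        logits_per_step, tokens = [], []
        for _ in range(max_len):
            h = self.gru(self.embed(tok), h)
            logits = self.head(h)
            tok = logits.argmax(dim=-1)
            logits_per_step.append(logits)
            tokens.append(tok)
            if (tok == EOS).all():      # decoding stops once every caption ends
                break
        return logits_per_step, tokens


def slowdown_attack(model, image, eps=8 / 255, steps=50, alpha=1 / 255):
    """PGD-style perturbation that suppresses EOS probability at every decoding step."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        logits_per_step, _ = model.decode(image + delta)
        # Total log-probability of EOS across steps; minimizing it delays termination.
        loss = sum(torch.log_softmax(l, dim=-1)[:, EOS].sum() for l in logits_per_step)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend on EOS probability
            delta.clamp_(-eps, eps)              # keep the perturbation imperceptible
            delta.grad.zero_()
    return (image + delta).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyCaptioner().eval()
    image = torch.rand(1, 3, 32, 32)
    adv = slowdown_attack(model, image)
    print("decoding steps:", len(model.decode(image)[1]), "->", len(model.decode(adv)[1]))
```

Suppressing EOS is only one possible surrogate for increased latency; the actual objective, perturbation budget, and target architectures used by NICGSlowDown are described in the paper itself.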
