Paper Title
Self-Attention Attribution: Interpreting Information Interactions Inside Transformer
Paper Authors
Paper Abstract
The great success of Transformer-based models benefits from the powerful multi-head self-attention mechanism, which learns token dependencies and encodes contextual information from the input. Prior work strives to attribute model decisions to individual input features with different saliency measures, but it fails to explain how these input features interact with each other to reach predictions. In this paper, we propose a self-attention attribution method to interpret the information interactions inside Transformer. We take BERT as an example and conduct extensive studies. First, we apply self-attention attribution to identify the important attention heads, while the others can be pruned with only marginal performance degradation. Furthermore, we extract the most salient dependencies in each layer to construct an attribution tree, which reveals the hierarchical interactions inside Transformer. Finally, we show that the attribution results can be used as adversarial patterns to implement non-targeted attacks towards BERT.
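The abstract does not spell out how the attribution scores are computed. The sketch below illustrates one plausible formulation, an integrated-gradients-style attribution over a single layer's attention scores; the gradient hook `grad_fn`, the zero-attention baseline, and the number of approximation steps are illustrative assumptions rather than details taken from the text above.

```python
import torch


def attention_head_attribution(attn, grad_fn, steps=20):
    """Approximate an integrated-gradients-style attribution for one layer's attention maps.

    attn:    tensor of shape (num_heads, seq_len, seq_len) holding the layer's attention scores.
    grad_fn: callable that re-runs the model with the given attention maps substituted into
             this layer and returns the gradient of the output logit w.r.t. those maps;
             wiring up such a hook is model-specific and not shown here.
    steps:   number of Riemann steps used to approximate the path integral.
    """
    total_grad = torch.zeros_like(attn)
    for k in range(1, steps + 1):
        # Scale the attention maps from the all-zero baseline up to the observed values.
        total_grad += grad_fn(attn * (k / steps))
    # Element-wise product of attention scores and averaged gradients gives the attribution.
    return attn * total_grad / steps


if __name__ == "__main__":
    num_heads, seq_len = 12, 8
    attn = torch.rand(num_heads, seq_len, seq_len).softmax(dim=-1)

    def dummy_grad(a):
        # Stand-in for a real backward pass through BERT; returns a constant gradient.
        return torch.ones_like(a)

    scores = attention_head_attribution(attn, dummy_grad)
    # One way to rank heads for pruning: by the largest attribution score within each head.
    print(scores.amax(dim=(1, 2)))
```

In this sketch, heads whose maximum attribution stays near zero would be candidates for pruning, and the largest per-token-pair scores in each layer could be chained across layers to form the kind of attribution tree the abstract describes; both uses follow the abstract's description rather than a specified algorithm.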