Paper Title

QUACKIE: A NLP Classification Task With Ground Truth Explanations

Paper Authors

Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki

Paper Abstract

NLP Interpretability aims to increase trust in model predictions. This makes evaluating interpretability approaches a pressing issue. There are multiple datasets for evaluating NLP Interpretability, but their dependence on human-provided ground truths raises questions about their unbiasedness. In this work, we take a different approach and formulate a specific classification task by diverting question-answering datasets. For this custom classification task, the interpretability ground truth arises directly from the definition of the classification problem. We use this method to propose a benchmark and lay the groundwork for future research in NLP interpretability by evaluating a wide range of current state-of-the-art methods.
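The abstract's central idea, deriving the explanation ground truth from the task definition itself rather than from human annotation, can be illustrated with a small sketch. The snippet below shows one hypothetical way to turn a SQuAD-style QA record (a context with an annotated answer span) into a classification example whose rationale is determined objectively: the sentence containing the answer. The names (`QARecord`, `to_classification_example`) and the exact labeling scheme are illustrative assumptions, not the paper's precise protocol.

```python
# Illustrative sketch (not the paper's exact construction): derive a
# classification example with a built-in explanation ground truth from a
# SQuAD-style QA record. Because the answer span is annotated, the sentence
# covering it serves as an objective rationale, with no human judgment needed.
import re
from dataclasses import dataclass


@dataclass
class QARecord:
    context: str
    question: str
    answer_text: str
    answer_start: int  # character offset of the answer within `context`


def split_sentences(text: str) -> list[tuple[int, str]]:
    """Naive sentence splitter returning (char_offset, raw_sentence) pairs."""
    spans, start = [], 0
    for match in re.finditer(r"[.!?]\s+", text):
        spans.append((start, text[start:match.end()]))
        start = match.end()
    if start < len(text):
        spans.append((start, text[start:]))
    return spans


def to_classification_example(record: QARecord) -> dict:
    """Build a (question, context) -> label example with a rationale.

    Label 1 means "this context answers the question"; the ground-truth
    explanation is the index of the sentence covering the answer span.
    """
    assert record.context[record.answer_start:].startswith(record.answer_text)
    sentences = split_sentences(record.context)
    rationale = next(
        i for i, (offset, sent) in enumerate(sentences)
        if offset <= record.answer_start < offset + len(sent)
    )
    return {
        "question": record.question,
        "context": record.context,
        "label": 1,
        "rationale_sentence": rationale,  # objective target for explainers
    }


if __name__ == "__main__":
    rec = QARecord(
        context="Paris is the capital of France. It lies on the Seine.",
        question="What is the capital of France?",
        answer_text="Paris",
        answer_start=0,
    )
    print(to_classification_example(rec))
```

Under this construction, negative examples could analogously be formed by pairing a question with a context that does not contain its answer, and interpretability methods would then be scored on whether they recover the rationale sentence for positive predictions.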
