Paper Title
Smoothing Entailment Graphs with Language Models
Paper Authors
Paper Abstract
The diversity and Zipfian frequency distribution of natural language predicates in corpora leads to sparsity in Entailment Graphs (EGs) built by Open Relation Extraction (ORE). EGs are computationally efficient and explainable models of natural language inference, but as symbolic models, they fail if a novel premise or hypothesis vertex is missing at test-time. We present theory and methodology for overcoming such sparsity in symbolic models. First, we introduce a theory of optimal smoothing of EGs by constructing transitive chains. We then demonstrate an efficient, open-domain, and unsupervised smoothing method using an off-the-shelf Language Model to find approximations of missing premise predicates. This improves recall by 25.1 and 16.3 percentage points on two difficult directional entailment datasets, while raising average precision and maintaining model explainability. Further, in a QA task we show that EG smoothing is most useful for answering questions with lesser supporting text, where missing premise predicates are more costly. Finally, controlled experiments with WordNet confirm our theory and show that hypothesis smoothing is difficult, but possible in principle.
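The core idea of premise smoothing described above can be sketched as follows: when a query predicate is missing from the entailment graph's vertices, back off to the most similar known predicate and reason from there. This is a minimal illustration, not the paper's implementation; a real system would embed predicates with an off-the-shelf language model, while here a toy bag-of-words cosine similarity stands in, and the predicates and `smooth_premise` helper are hypothetical.

```python
# Sketch of premise smoothing for a symbolic entailment graph (EG):
# if the premise predicate is absent at test time, substitute its
# nearest known vertex. The embed() function below is a toy stand-in
# for a language-model embedding (assumption, for illustration only).

from collections import Counter
from math import sqrt

def embed(predicate):
    """Toy stand-in for an LM embedding: bag-of-words counts."""
    return Counter(predicate.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def smooth_premise(query, graph_vertices):
    """Return the query if it is a known vertex, else its nearest neighbour."""
    vertices = list(graph_vertices)
    if query in vertices:
        return query
    return max(vertices, key=lambda v: cosine(embed(query), embed(v)))

# Hypothetical EG edges over known premise predicates (premise -> hypotheses).
graph = {
    "purchase from": {"buy from", "acquire from"},
    "buy from": {"acquire from"},
}

# "make a purchase from" is missing, so smoothing maps it to a known vertex,
# whose outgoing entailment edges can then be used for inference.
premise = smooth_premise("make a purchase from", graph)
print(premise)  # → purchase from
```

Swapping the toy `embed()` for real sentence embeddings changes only the similarity function; the symbolic graph and its interpretable entailment edges are untouched, which is how smoothing can raise recall while maintaining explainability.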