Paper Title

Examining the rhetorical capacities of neural language models

Paper Authors

Zining Zhu, Chuer Pan, Mohamed Abdalla, Frank Rudzicz

Paper Abstract

Recently, neural language models (LMs) have demonstrated impressive abilities in generating high-quality discourse. While many recent papers have analyzed the syntactic aspects encoded in LMs, there has been no analysis to date of the inter-sentential, rhetorical knowledge. In this paper, we propose a method that quantitatively evaluates the rhetorical capacities of neural LMs. We examine the capacities of neural LMs to understand the rhetoric of discourse by evaluating their ability to encode a set of linguistic features derived from Rhetorical Structure Theory (RST). Our experiments show that BERT-based LMs outperform other Transformer LMs, revealing the richer discourse knowledge in their intermediate-layer representations. In addition, GPT-2 and XLNet apparently encode less rhetorical knowledge, and we suggest an explanation drawing from linguistic philosophy. Our method shows an avenue towards quantifying the rhetorical capacities of neural LMs.
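
The evaluation the abstract describes is essentially a layer-wise probing setup: representations are read out of each intermediate layer of a pretrained LM and tested for how well they predict RST-derived features. Below is a minimal sketch of that idea, not the authors' implementation: the choice of `bert-base-uncased`, the toy example sentences, the two-way Cause/Concession labels, and the logistic-regression probe are all assumptions made for illustration.

```python
# Minimal probing sketch (not the paper's code): check whether an
# intermediate layer of BERT encodes an RST-style rhetorical distinction.
# The sentences and relation labels below are hypothetical placeholders.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

texts = [
    "The rain stopped, so we went outside.",      # hypothetical Cause
    "Prices fell, so demand recovered.",          # hypothetical Cause
    "We went outside although it was raining.",   # hypothetical Concession
    "Demand recovered although prices rose.",     # hypothetical Concession
]
rst_labels = [0, 0, 1, 1]  # 0 = Cause, 1 = Concession (toy labels)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def layer_embedding(text: str, layer: int):
    """Mean-pool the token vectors from one hidden layer."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer]  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0).numpy()

# Fit a linear probe per layer; higher accuracy suggests the layer carries
# more of the probed rhetorical feature. (A real probe would use held-out
# data and far more examples; training accuracy here is only illustrative.)
for layer in range(1, model.config.num_hidden_layers + 1):
    X = [layer_embedding(t, layer) for t in texts]
    probe = LogisticRegression(max_iter=1000).fit(X, rst_labels)
    print(f"layer {layer:2d}: probe accuracy = {probe.score(X, rst_labels):.2f}")
```

Comparing probe accuracy across layers, and across models such as GPT-2 or XLNet, is what allows rhetorical knowledge in different LMs' representations to be quantified and ranked, as the abstract reports.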
