Paper Title
NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning
Paper Authors
Paper Abstract
Our goal is a modern approach to answering questions via systematic reasoning where answers are supported by human interpretable proof trees grounded in an NL corpus of authoritative facts. Such a system would help alleviate the challenges of interpretability and hallucination with modern LMs, and the lack of grounding of current explanation methods (e.g., Chain-of-Thought). This paper proposes a new take on Prolog-based inference engines, where we replace handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval. Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA as entailment tree proof search, going beyond earlier work explaining known-to-be-true facts from text. In experiments, NELLIE outperforms a similar-sized state-of-the-art reasoner [Tafjord et al., 2022] while producing knowledge-grounded explanations. We also find NELLIE can exploit both semi-structured and NL text corpora to guide reasoning. Together these suggest a new way to jointly reap the benefits of both modern neural methods and traditional symbolic reasoning.
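The core idea of treating QA as entailment tree proof search can be sketched as a backward-chaining loop in the style of a Prolog engine, where candidate decompositions of a hypothesis come from a generator rather than handcrafted rules. This is a minimal illustrative sketch, not the paper's implementation: the `decompose` and `is_grounded` callables below are hypothetical stand-ins for NELLIE's neural guided generation and dense-retrieval grounding components.

```python
def prove(hypothesis, decompose, is_grounded, depth=3):
    """Backward-chaining search for an entailment-tree proof of `hypothesis`.

    Returns a proof tree as (hypothesis, children) if one is found, else None.
    A hypothesis becomes a leaf when it is grounded in the fact corpus.
    """
    if is_grounded(hypothesis):
        return (hypothesis, [])  # grounded leaf: supported by an authoritative fact
    if depth == 0:
        return None  # search budget exhausted
    # In place of Prolog's handcrafted rules, a (here: stubbed) neural model
    # proposes candidate decompositions of the hypothesis into subgoals.
    for subgoals in decompose(hypothesis):
        children = [prove(g, decompose, is_grounded, depth - 1) for g in subgoals]
        if all(c is not None for c in children):
            return (hypothesis, children)  # every subgoal was proved
    return None
```

A toy usage, with a two-fact corpus standing in for the retrieval index and a hand-written decomposition standing in for the generation model:

```python
corpus = {"an eagle is a bird", "birds have wings"}

def is_grounded(h):
    return h in corpus  # stand-in for semiparametric dense retrieval

def decompose(h):
    # stand-in for neural guided generation of decompositions
    if h == "an eagle has wings":
        yield ["an eagle is a bird", "birds have wings"]

proof = prove("an eagle has wings", decompose, is_grounded)
```

Here `proof` is a human-interpretable tree whose leaves are all corpus facts, which is the property that distinguishes this style of grounded proof search from free-form chain-of-thought text.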