Paper Title
Span-ConveRT: Few-shot Span Extraction for Dialog with Pretrained Conversational Representations
Paper Authors
Paper Abstract
We introduce Span-ConveRT, a lightweight model for dialog slot-filling which frames the task as a turn-based span extraction task. This formulation allows for a simple integration of conversational knowledge coded in large pretrained conversational models such as ConveRT (Henderson et al., 2019). We show that leveraging such knowledge in Span-ConveRT is especially useful for few-shot learning scenarios: we report consistent gains over 1) a span extractor that trains representations from scratch in the target domain, and 2) a BERT-based span extractor. To inspire more work on span extraction for the slot-filling task, we also release RESTAURANTS-8K, a new challenging dataset of 8,198 utterances, compiled from actual conversations in the restaurant booking domain.
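To illustrate the span-extraction framing of slot filling, here is a minimal sketch in plain Python. It does not reproduce the Span-ConveRT architecture; the hand-crafted per-token start/end scores stand in for what an encoder such as ConveRT would produce, and `best_span` is a hypothetical helper showing how a slot value is read off as the highest-scoring (start, end) token span.

```python
# Hypothetical sketch: slot filling framed as span extraction.
# Per-token start/end scores are hand-crafted here, standing in for
# the outputs of a pretrained conversational encoder.

def best_span(start_scores, end_scores, max_len=10):
    """Return the (start, end) token indices of the highest-scoring
    valid span, where end >= start and the span is at most max_len tokens."""
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_len, len(end_scores))):
            score = s + end_scores[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best

tokens = ["book", "a", "table", "for", "4", "people", "at", "7pm"]
# Toy scores peaking on "4" (start) and "people" (end),
# as if decoding a party-size slot for this turn.
start = [0.0, 0.0, 0.1, 0.0, 2.5, 0.2, 0.0, 0.1]
end   = [0.0, 0.0, 0.0, 0.1, 0.3, 2.0, 0.0, 0.4]

i, j = best_span(start, end)
print(" ".join(tokens[i:j + 1]))  # -> "4 people"
```

In this framing, each slot gets its own start/end scoring heads over the turn's tokens, so the model never needs a closed vocabulary of slot values: any substring of the utterance can be a value.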