Paper Title

Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis

Paper Authors

Siddharth Varia, Shuai Wang, Kishaloy Halder, Robert Vacareanu, Miguel Ballesteros, Yassine Benajiba, Neha Anna John, Rishita Anubhai, Smaranda Muresan, Dan Roth

Paper Abstract

Aspect-based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis task which involves four elements from user-generated texts: aspect term, aspect category, opinion term, and sentiment polarity. Most computational approaches focus on some of the ABSA sub-tasks such as tuple (aspect term, sentiment polarity) or triplet (aspect term, opinion term, sentiment polarity) extraction using either pipeline or joint modeling approaches. Recently, generative approaches have been proposed to extract all four elements as (one or more) quadruplets from text as a single task. In this work, we take a step further and propose a unified framework for solving ABSA and the associated sub-tasks to improve performance in few-shot scenarios. To this end, we fine-tune a T5 model with instructional prompts in a multi-task learning fashion covering all the sub-tasks, as well as the entire quadruple prediction task. In experiments with multiple benchmark datasets, we show that the proposed multi-task prompting approach brings a performance boost (by absolute 8.29 F1) in the few-shot learning setting.
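To make the multi-task instruction-tuning setup concrete, below is a minimal sketch of how such training data could be framed with a T5 model via the HuggingFace `transformers` library. The prompt wordings, the linearized target format, and the training loop are illustrative assumptions, not the paper's released templates or code.

```python
# Minimal sketch (assumed, not the authors' implementation): instruction-style
# prompts for ABSA sub-tasks and full quadruple prediction with T5.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# The same sentence is paired with several instructions, one per sub-task,
# so a single model is fine-tuned on all sub-tasks plus the quadruple task.
sentence = "The pasta was delicious but the service was slow."
examples = [
    # (instruction prompt, linearized target) -- wording/format are assumptions
    ("Extract aspect terms from the sentence: " + sentence,
     "pasta; service"),
    ("Extract (aspect term, sentiment polarity) pairs from the sentence: " + sentence,
     "(pasta, positive); (service, negative)"),
    ("Extract (aspect term, aspect category, opinion term, sentiment polarity) "
     "quadruples from the sentence: " + sentence,
     "(pasta, food quality, delicious, positive); "
     "(service, service general, slow, negative)"),
]

# Standard seq2seq fine-tuning step: the prompt is the encoder input,
# the linearized tuples are the decoder labels.
for prompt, target in examples:
    inputs = tokenizer(prompt, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss
    loss.backward()  # in practice, use an optimizer and batched training
```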
