Paper Title

CIA_NITT at WNUT-2020 Task 2: Classification of COVID-19 Tweets Using Pre-trained Language Models

Authors

Babu, Yandrapati Prakash; Eswari, Rajagopal

Abstract

This paper presents our models for WNUT-2020 shared task 2. The shared task involves identification of COVID-19-related informative tweets. We treat this as a binary text classification problem and experiment with pre-trained language models. Our first model, based on CT-BERT, achieves an F1-score of 88.7%, and our second model, an ensemble of CT-BERT, RoBERTa and SVM, achieves an F1-score of 88.52%.
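The abstract does not specify how the three models' outputs are combined. A minimal sketch of one plausible ensembling step, assuming each model produces a per-tweet probability for the INFORMATIVE class and the ensemble averages them (the function name, averaging scheme, and threshold are illustrative assumptions, not the authors' stated method):

```python
# Hypothetical sketch of ensembling CT-BERT, RoBERTa and SVM outputs.
# Assumption: each model yields a probability that a tweet is INFORMATIVE;
# the ensemble averages the three and thresholds the result.

def ensemble_predict(ct_bert_probs, roberta_probs, svm_probs, threshold=0.5):
    """Average per-tweet probabilities from the three models and
    threshold the mean to produce a binary label per tweet."""
    labels = []
    for p1, p2, p3 in zip(ct_bert_probs, roberta_probs, svm_probs):
        avg = (p1 + p2 + p3) / 3.0
        labels.append("INFORMATIVE" if avg >= threshold else "UNINFORMATIVE")
    return labels
```

For example, `ensemble_predict([0.9], [0.8], [0.4])` averages the scores to 0.7 and labels the tweet INFORMATIVE even though the SVM alone would have voted the other way, which is the usual motivation for soft-voting ensembles.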
