Paper Title
An Unsupervised Sentence Embedding Method by Mutual Information Maximization
Paper Authors
Paper Abstract
BERT is inefficient for sentence-pair tasks such as clustering or semantic search because it needs to evaluate combinatorially many sentence pairs, which is very time-consuming. Sentence BERT (SBERT) attempted to solve this challenge by learning semantically meaningful representations of single sentences, so that similarity comparisons can be performed easily. However, SBERT is trained on corpora with high-quality labeled sentence pairs, which limits its application to tasks where labeled data is extremely scarce. In this paper, we propose a lightweight extension on top of BERT and a novel self-supervised learning objective based on mutual information maximization strategies to derive meaningful sentence embeddings in an unsupervised manner. Unlike SBERT, our method is not restricted by the availability of labeled data, so it can be applied to different domain-specific corpora. Experimental results show that the proposed method significantly outperforms other unsupervised sentence embedding baselines on common semantic textual similarity (STS) tasks and downstream supervised tasks. It also outperforms SBERT in a setting where in-domain labeled data is not available, and achieves performance competitive with supervised methods on various tasks.
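The abstract describes a self-supervised objective based on mutual information maximization between sentence-level and token-level representations. Below is a minimal PyTorch sketch, not the authors' released implementation, of one common way to instantiate such an objective: a Jensen-Shannon-style MI lower bound between a pooled sentence embedding and local token features, with tokens from other sentences in the batch serving as negatives. The function name `jsd_mi_loss`, the tensor shapes, and the use of mean pooling are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def jsd_mi_loss(local_feats: torch.Tensor, global_feats: torch.Tensor) -> torch.Tensor:
    # local_feats:  (batch, seq_len, dim) token-level features, e.g. from an
    #               encoder applied on top of BERT token outputs
    # global_feats: (batch, dim) pooled sentence embeddings
    batch, seq_len, dim = local_feats.shape
    # Pairwise scores between every token and every sentence embedding: (batch, seq_len, batch)
    scores = torch.einsum("btd,kd->btk", local_feats, global_feats)
    pos_mask = torch.eye(batch, device=scores.device).unsqueeze(1)  # (batch, 1, batch)
    neg_mask = 1.0 - pos_mask
    # Jensen-Shannon MI lower bound (Deep InfoMax style) with softplus scoring
    e_pos = -F.softplus(-scores)   # term for positive (same-sentence) pairs
    e_neg = F.softplus(scores)     # term for negative (cross-sentence) pairs
    pos_term = (e_pos * pos_mask).sum() / (pos_mask.sum() * seq_len)
    neg_term = (e_neg * neg_mask).sum() / (neg_mask.sum() * seq_len)
    return neg_term - pos_term     # minimizing this maximizes the MI bound

# Toy usage with random tensors standing in for encoder outputs:
local = torch.randn(8, 32, 256, requires_grad=True)  # 8 sentences, 32 tokens, 256-dim features
glob = local.mean(dim=1)                             # mean pooling as a stand-in global embedding
loss = jsd_mi_loss(local, glob)
loss.backward()
```

In this formulation no labeled sentence pairs are needed: the only supervision signal is which tokens belong to which sentence within a batch, which is consistent with the unsupervised setting the abstract describes.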