Paper Title

Tucker decomposition-based Temporal Knowledge Graph Completion

Paper Authors

Pengpeng Shao, Guohua Yang, Dawei Zhang, Jianhua Tao, Feihu Che, Tong Liu

Paper Abstract

Knowledge graphs have been demonstrated to be an effective tool for numerous intelligent applications. However, a large amount of valuable knowledge still exists only implicitly in knowledge graphs. To enrich existing knowledge graphs, recent years have witnessed many algorithms for link prediction and knowledge graph embedding designed to infer new facts. However, most of these studies focus on static knowledge graphs and ignore the temporal information that reflects the validity of knowledge. Developing models for temporal knowledge graph completion is an increasingly important task. In this paper, we build a new tensor decomposition model for temporal knowledge graph completion, inspired by the Tucker decomposition of an order-4 tensor. We demonstrate that the proposed model is fully expressive and report state-of-the-art results on several public benchmarks. Additionally, we present several regularization schemes to improve the training strategy and study their impact on the proposed model. Experimental studies on three temporal datasets (i.e., ICEWS2014, ICEWS2005-15, and GDELT) justify our design and demonstrate that our model outperforms baselines by a clear margin on the link prediction task.
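The core idea in the abstract, scoring a temporal fact (subject, relation, object, timestamp) by contracting an order-4 Tucker core tensor with four embedding vectors, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, random embeddings, and function names are all hypothetical, and a real model would learn these parameters by gradient descent with the regularization schemes the paper studies.

```python
import numpy as np

# Hypothetical sizes; the paper's actual dimensions are not given here.
rng = np.random.default_rng(0)
n_ent, n_rel, n_ts = 100, 20, 30            # entities, relations, timestamps
d_e, d_r, d_t = 8, 6, 4                     # embedding dimensions

E = rng.normal(size=(n_ent, d_e))           # entity embeddings
R = rng.normal(size=(n_rel, d_r))           # relation embeddings
T = rng.normal(size=(n_ts, d_t))            # timestamp embeddings
W = rng.normal(size=(d_e, d_r, d_e, d_t))   # order-4 Tucker core tensor

def score(s, r, o, t):
    """Score one quadruple (s, r, o, t) by contracting the order-4
    core tensor W with the four embedding vectors."""
    return np.einsum("irjt,i,r,j,t->", W, E[s], R[r], E[o], T[t])

def score_all_objects(s, r, t):
    """Link prediction: score every candidate object at once for a
    given (subject, relation, timestamp) query."""
    return np.einsum("irjt,i,r,t,nj->n", W, E[s], R[r], T[t], E)
```

Ranking `score_all_objects(s, r, t)` over all entities is the standard way such models are evaluated on link prediction (e.g., via MRR and Hits@k on the datasets named above).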
