Paper Title


Contrastive Multi-View Representation Learning on Graphs

Paper Authors

Kaveh Hassani, Amir Hosein Khasahmadi

Paper Abstract


We introduce a self-supervised approach for learning node and graph level representations by contrasting structural views of graphs. We show that unlike visual representation learning, increasing the number of views to more than two or contrasting multi-scale encodings do not improve performance, and the best performance is achieved by contrasting encodings from first-order neighbors and a graph diffusion. We achieve new state-of-the-art results in self-supervised learning on 8 out of 8 node and graph classification benchmarks under the linear evaluation protocol. For example, on Cora (node) and Reddit-Binary (graph) classification benchmarks, we achieve 86.8% and 84.5% accuracy, which are 5.5% and 2.4% relative improvements over previous state-of-the-art. When compared to supervised baselines, our approach outperforms them in 4 out of 8 benchmarks. Source code is released at: https://github.com/kavehhassani/mvgrl
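The abstract's central design choice is to contrast two structural views of the same graph: the first-order adjacency view and a graph-diffusion view. The minimal NumPy sketch below illustrates what those two views look like for a toy graph, using the closed-form Personalized PageRank (PPR) diffusion; the graph, feature matrix, and linear "encoder" here are placeholders for illustration only, and the actual trained GNN encoders and contrastive objective are in the released source code linked above.

```python
# Hedged sketch (not the authors' code): build the two structural views the
# abstract contrasts -- a first-order adjacency view and a graph-diffusion
# (Personalized PageRank) view -- for a toy graph, using only NumPy.
import numpy as np

def sym_norm_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def ppr_diffusion(A, alpha=0.2):
    """Closed-form PPR diffusion: alpha * (I - (1-alpha) D^-1/2 A D^-1/2)^-1."""
    n = A.shape[0]
    d = np.maximum(A.sum(axis=1), 1e-12)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A @ D_inv_sqrt
    return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * A_norm)

# Toy 4-node graph and random node features (hypothetical data).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 8)

# One untrained linear "encoder"; the paper trains GNN encoders instead.
W = np.random.randn(8, 16)
H_adj  = np.tanh(sym_norm_adj(A) @ X @ W)   # node embeddings, adjacency view
H_diff = np.tanh(ppr_diffusion(A) @ X @ W)  # node embeddings, diffusion view

# Graph-level summaries via mean pooling; training would maximize agreement
# between node embeddings of one view and the graph summary of the other.
g_adj, g_diff = H_adj.mean(axis=0), H_diff.mean(axis=0)
print("cross-view node-graph scores:", H_adj @ g_diff, H_diff @ g_adj)
```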
