Paper Title

Improving Expressivity of Graph Neural Networks

Authors

Purgał, Stanisław

Abstract

We propose a Graph Neural Network with greater expressive power than commonly used GNNs - one not constrained to differentiating only between graphs that the Weisfeiler-Lehman test recognizes as non-isomorphic. We use a graph attention network with an expanding attention window that aggregates information from nodes exponentially far away. We also use partially random initial embeddings, allowing differentiation between nodes that would otherwise look identical. This could cause problems with the traditional dropout mechanism, so we use a "head dropout", randomly ignoring some attention heads rather than some dimensions of the embedding.
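
To make the three ideas in the abstract concrete, here is a minimal PyTorch sketch, not the paper's actual implementation. All names (HeadDropout, partially_random_embedding, expanding_attention_masks) are hypothetical, and the reading of the "expanding attention window" as a 2**l-hop neighborhood per layer is an assumption made for illustration.

```python
import torch
import torch.nn as nn


class HeadDropout(nn.Module):
    """Drop whole attention heads at random during training, instead of
    zeroing individual embedding dimensions as standard dropout does."""

    def __init__(self, p: float = 0.1):
        super().__init__()
        self.p = p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (num_nodes, num_heads, head_dim)
        if not self.training or self.p == 0.0:
            return x
        keep = (torch.rand(x.size(0), x.size(1), 1, device=x.device) > self.p).to(x.dtype)
        return x * keep / (1.0 - self.p)  # rescale so the expected value is unchanged


def partially_random_embedding(features: torch.Tensor, random_dim: int = 8) -> torch.Tensor:
    """Append fresh random coordinates to each node's feature vector so that
    structurally identical nodes still start from distinct embeddings."""
    noise = torch.randn(features.size(0), random_dim, device=features.device)
    return torch.cat([features, noise], dim=-1)


def expanding_attention_masks(adj: torch.Tensor, num_layers: int) -> list:
    """Boolean masks where layer l may attend to nodes within 2**l hops,
    obtained by repeatedly squaring the reachability matrix."""
    n = adj.size(0)
    reach = adj.bool() | torch.eye(n, dtype=torch.bool, device=adj.device)
    masks = []
    for _ in range(num_layers):
        masks.append(reach.clone())
        # squaring doubles the reachable radius: 1, 2, 4, 8, ... hops
        reach = (reach.float() @ reach.float()) > 0
    return masks
```

The per-head keep mask in HeadDropout plays well with the random part of the embedding: dropping a whole head removes one aggregation "view" while leaving the remaining heads' use of the random coordinates intact, whereas dimension-wise dropout would corrupt those coordinates directly.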
