Paper Title


Robust Training of Graph Neural Networks via Noise Governance

Paper Authors

Siyi Qian, Haochao Ying, Renjun Hu, Jingbo Zhou, Jintai Chen, Danny Z. Chen, Jian Wu

Paper Abstract


Graph Neural Networks (GNNs) have become widely used models for semi-supervised learning. However, the robustness of GNNs in the presence of label noise remains a largely under-explored problem. In this paper, we consider an important yet challenging scenario where labels on nodes of graphs are not only noisy but also scarce. In this scenario, the performance of GNNs is prone to degrade due to label noise propagation and insufficient learning. To address these issues, we propose a novel RTGNN (Robust Training of Graph Neural Networks via Noise Governance) framework that achieves better robustness by learning to explicitly govern label noise. More specifically, we introduce self-reinforcement and consistency regularization as supplemental supervision. The self-reinforcement supervision is inspired by the memorization effects of deep neural networks and aims to correct noisy labels. Further, the consistency regularization prevents GNNs from overfitting to noisy labels via mimicry loss in both the inter-view and intra-view perspectives. To leverage such supervision, we divide labels into clean and noisy types, rectify inaccurate labels, and further generate pseudo-labels on unlabeled nodes. Supervision for nodes with different types of labels is then chosen adaptively. This enables sufficient learning from clean labels while limiting the impact of noisy ones. We conduct extensive experiments to evaluate the effectiveness of our RTGNN framework, and the results validate its consistent superiority over state-of-the-art methods under two types of label noise and various noise rates.
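The noise-governance step described above (divide labels into clean and noisy types via the memorization effect, correct noisy labels when the model is confident, and generate pseudo-labels on unlabeled nodes) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the small-loss criterion, the `clean_ratio` and `conf_thresh` parameters, and the function name `govern_labels` are all assumptions made for the example.

```python
def govern_labels(losses, probs, labels, labeled_mask,
                  clean_ratio=0.5, conf_thresh=0.9):
    """Illustrative sketch of a noise-governance step.

    losses       -- per-node training loss (labeled nodes)
    probs        -- per-node class-probability lists from the GNN
    labels       -- observed (possibly noisy) labels
    labeled_mask -- True where a node carries an observed label
    Returns (clean_mask, corrected_labels, pseudo_mask).
    """
    n = len(labels)
    clean = [False] * n
    corrected = list(labels)
    pseudo = [False] * n

    # Small-loss criterion (memorization effect): labeled nodes with
    # the smallest losses are treated as carrying clean labels.
    labeled_idx = [i for i in range(n) if labeled_mask[i]]
    k = max(1, int(clean_ratio * len(labeled_idx)))
    for i in sorted(labeled_idx, key=lambda j: losses[j])[:k]:
        clean[i] = True

    for i in range(n):
        conf = max(probs[i])
        pred = probs[i].index(conf)
        if labeled_mask[i] and not clean[i] and conf >= conf_thresh:
            # Self-reinforcement: rectify a confidently mispredicted
            # noisy label with the model's own prediction.
            corrected[i] = pred
        elif not labeled_mask[i] and conf >= conf_thresh:
            # Pseudo-label unlabeled nodes with confident predictions.
            corrected[i] = pred
            pseudo[i] = True
    return clean, corrected, pseudo
```

Downstream, supervision would then be chosen adaptively per node type: full loss on clean nodes, corrected or pseudo-labels (possibly down-weighted) on the rest, limiting the influence of noisy labels while still learning from every node.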
