Paper Title

On the Limit of Explaining Black-box Temporal Graph Neural Networks

Paper Authors

Vu, Minh N., Thai, My T.

Paper Abstract

Temporal Graph Neural Networks (TGNNs) have recently received a lot of attention due to their capability in modeling time-evolving graph-related tasks. Similar to Graph Neural Networks, it is non-trivial to interpret predictions made by a TGNN due to its black-box nature. A major approach to tackling this problem in GNNs is to analyze the model's responses to perturbations of its inputs, known as perturbation-based explanation methods. While these methods are convenient and flexible since they do not need internal access to the model, does this lack of internal access prevent them from revealing some important information about the predictions? Motivated by this question, this work studies the limit of several classes of perturbation-based explanation methods. In particular, by constructing specific instances of TGNNs, we show that (i) node perturbation cannot reliably identify the paths carrying out the prediction, (ii) edge perturbation is not reliable in determining all nodes contributing to the prediction, and (iii) perturbing both nodes and edges does not reliably help us identify the graph's components carrying out the temporal aggregation in TGNNs.
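The recipe the abstract refers to is simple to state: score each input element by how much the black-box output changes when that element is perturbed. Below is a minimal sketch of this idea in Python, assuming a toy stand-in for the model; `black_box_tgnn`, `node_perturbation_scores`, and `edge_perturbation_scores` are hypothetical names for illustration, not the constructions from the paper.

```python
import numpy as np

def black_box_tgnn(node_feats, temporal_edges):
    # Toy stand-in for a black-box TGNN: aggregates neighbor features
    # along time-ordered edges, then returns a scalar prediction.
    h = node_feats.copy()
    for src, dst, t in sorted(temporal_edges, key=lambda e: e[2]):
        h[dst] = h[dst] + 0.5 * h[src]
    return float(h.sum())

def node_perturbation_scores(model, node_feats, temporal_edges):
    # Importance of node v = |change in output| when v's features are zeroed.
    base = model(node_feats, temporal_edges)
    scores = {}
    for v in range(len(node_feats)):
        perturbed = node_feats.copy()
        perturbed[v] = 0.0
        scores[v] = abs(base - model(perturbed, temporal_edges))
    return scores

def edge_perturbation_scores(model, node_feats, temporal_edges):
    # Importance of edge e = |change in output| when e is removed.
    base = model(node_feats, temporal_edges)
    scores = {}
    for i, e in enumerate(temporal_edges):
        kept = temporal_edges[:i] + temporal_edges[i + 1:]
        scores[e] = abs(base - model(node_feats, kept))
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(4, 3))
    edges = [(0, 1, 0.0), (1, 2, 1.0), (2, 3, 2.0)]  # (src, dst, timestamp)
    print(node_perturbation_scores(black_box_tgnn, feats, edges))
    print(edge_perturbation_scores(black_box_tgnn, feats, edges))
```

The paper's negative results concern exactly this kind of procedure: on specially constructed TGNN instances, node scores of this kind cannot reliably recover the prediction-carrying paths, and edge scores can miss nodes that contribute to the prediction.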
