GraphMask: Interpreting graph neural networks for NLP with differentiable edge masking. ICLR'21. However, in a network-security setting, the authors argue that a good GNN explainability method should satisfy the following conditions: comprehensive explanations, i.e., it can explain the graph's nodes, edges, and attributes; accurate explanations, i.e., it can identify the important nodes/edges/attributes that contributed to the original prediction; the GNN model...
Despite the recent progress in Graph Neural Networks (GNNs), it remains challenging to explain the predictions made by GNNs. Existing explanation methods mainly focus on post-hoc explanations where another explanatory model is employed to provide explanations for a trained GNN. The fact that post-ho...
Official implementation of the AAAI'22 paper "ProtGNN: Towards Self-Explaining Graph Neural Networks" (https://arxiv.org/abs/2112.00911). The code is based on the PyTorch implementation of [DIG](https://github.com/divelab/DIG). Requirements: pytorch 1.8.0, torch-geometric 2.0.2. Usage: Download ...
Stacking many layers in a GNN leads to over-smoothing: the features of nodes from different classes become nearly identical, making them impossible to classify. Over-smoothing arises mainly from the entanglement of representation transformation and propagation. Analysis of Deep GNNs: to quantitatively measure node-feature smoothness, SMV_g is defined as the smoothness metric value of the entire graph; the larger SMV_g is, the ...
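One common formulation of such a graph-level smoothness metric (a minimal sketch; the mean pairwise distance between normalized node features is an assumption in the spirit of the DAGNN analysis, not necessarily the exact SMV_g definition used here) is:

```python
import numpy as np

def smv(features: np.ndarray) -> float:
    """Graph-level smoothness metric value: the mean pairwise Euclidean
    distance between L2-normalized node feature vectors. Smaller values
    mean node features are more alike (more over-smoothed)."""
    # L2-normalize each node's feature vector (guard against zero norms)
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    x = features / np.clip(norms, 1e-12, None)
    n = x.shape[0]
    # Pairwise distances between all ordered node pairs (i != j)
    diff = x[:, None, :] - x[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    return dists.sum() / (n * (n - 1))

print(smv(np.array([[1.0, 0.0], [0.0, 1.0]])))  # distinct features: large value
print(smv(np.array([[1.0, 0.0], [1.0, 0.0]])))  # fully smoothed: 0.0
```

Under this formulation, deeper propagation drives node features toward each other and the metric toward zero.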
The graph G can be further regarded as a Markov chain whose transition matrix P is Â_⊕. This Markov chain is irreducible and aperiodic because the graph G is connected and self-loops are included in the connectivity. If a Markov chain is irreducible and aperiodic, then lim_{k→...
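The convergence claim can be checked numerically: for a connected graph with self-loops, powers of the row-normalized transition matrix converge to a rank-one matrix whose identical rows are the stationary distribution. A small NumPy sketch (the 3-node path graph is illustrative, not from the source):

```python
import numpy as np

# A small connected undirected graph (path 0-1-2), standing in for G
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_hat = A + np.eye(3)                         # include self-loops
P = A_hat / A_hat.sum(axis=1, keepdims=True)  # row-normalized transition matrix

# Irreducible (connected) + aperiodic (self-loops) => P^k converges
Pk = np.linalg.matrix_power(P, 50)
print(Pk)

# For a random walk on an undirected graph, the stationary distribution
# is proportional to node degree (here counting self-loops)
deg = A_hat.sum(axis=1)
stationary = deg / deg.sum()
print(stationary)  # every row of P^k approaches this vector
```

The second-largest eigenvalue of P governs the convergence rate; for this toy chain it is 1/2, so 50 steps are more than enough for machine-precision agreement.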
In this work we address this issue by extracting the knowledge acquired by trained Deep Neural Networks (DNNs) and representing this knowledge in a graph. The proposed graph encodes statistical correlations between neurons' activation values in order to expose the relationship between neurons in the...
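One simple way to realize such a graph (a hypothetical sketch, not the paper's exact construction) is to connect neurons whose activation values are strongly correlated across a set of inputs:

```python
import numpy as np

def correlation_graph(activations: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Build an adjacency matrix from neuron activation statistics.

    `activations` has shape (num_samples, num_neurons): each column holds one
    neuron's activation values over a set of inputs. Two neurons are linked
    when the absolute Pearson correlation of their activations exceeds
    `threshold`.
    """
    corr = np.corrcoef(activations, rowvar=False)   # (neurons, neurons)
    adj = (np.abs(corr) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)                        # no self-edges
    return adj

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))
x[:, 1] = x[:, 0] + 0.01 * rng.normal(size=200)  # neuron 1 mirrors neuron 0
adj = correlation_graph(x)
print(adj)  # neurons 0 and 1 end up connected; neuron 2 stays isolated
```

The resulting graph can then be analyzed with standard graph tooling to surface groups of neurons that fire together.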
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information, and have achieved promising performance on many graph tasks. However, GNNs are mostly treated as black boxes and lack human-intelligible explanations. Thus, they cannot be fully trusted and used in...
Evaluation and visualization are made universal for every explainer. After explaining a single graph, the pair (graph, edge_imp: np.ndarray) is saved as explainer.last_result by default, which is then evaluated or visualized. ratios = [0.1 * i for i in range(1, 11)]; acc_auc = refine.evaluate_acc(ratios).mean(...
Similar to the encoder, the feature decoder also consists of a series of BEANConv layers, custom convolution layers that perform convolution operations to capture relationships between nodes. The code snippet from GraphBEAN shows this: def create_feature_decoder(self): ...
where H is the matrix of activations for the l-th or (l+1)-th layer, σ is an activation function such as ReLU, D is the graph degree matrix, A is the self-connected adjacency matrix, and W is the layer-specific trainable weight matrix. From this basic calculation, Atwood and ...
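The quantities described match the GCN propagation rule of Kipf & Welling, H^(l+1) = σ(D^{-1/2} Â D^{-1/2} H^(l) W^(l)); a minimal NumPy sketch (the toy graph and identity weights are illustrative assumptions):

```python
import numpy as np

def gcn_layer(H: np.ndarray, A_hat: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One GCN propagation step: H' = ReLU(D^{-1/2} A_hat D^{-1/2} H W),
    where A_hat is the self-connected adjacency matrix and D its degree
    matrix (symmetric normalization, Kipf & Welling 2017)."""
    D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)  # ReLU activation

# Toy example: 3 nodes on a path, 2 features in, 2 features out
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A_hat = A + np.eye(3)                    # self-connected adjacency matrix
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W = np.eye(2)                            # identity weights for illustration
print(gcn_layer(H, A_hat, W))
```

Each output row mixes a node's own features with its neighbors', weighted by the normalized adjacency.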