Paper title: Self-Attention Graph Pooling
Authors: Junhyun Lee, Inyeop Lee, Jaewoo Kang
Venue: ICML 2019

1 Preamble

Applies downsampling (pooling) to graphs.

2 Introduction

Three types of graph pooling: topology based pooling; global pooling; hierarchical pooling.
The input graph is pooled with the mask operation referred to in Fig. 1:

X' = X_{idx,:},  X_out = X' ⊙ Z_mask,  A_out = A_{idx,idx}

where X_{idx,:} is the row-wise (i.e. node-wise) indexed feature matrix, ⊙ is the broadcasted elementwise product (the score vector Z is broadcast across the feature columns and then multiplied in), A_{idx,idx} is the row- and column-wise indexed adjacency matrix, and X_out and A_out are the new feature matrix and the corresponding adjacency matrix.
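A minimal pure-Python sketch of this masking step (the function name sag_pool, the pooling ratio, and the toy matrices below are illustrative assumptions, not the paper's implementation):

```python
def top_rank(scores, k):
    # indices of the k nodes with the highest self-attention scores
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def sag_pool(X, A, Z, ratio=0.5):
    # X: node feature matrix, A: adjacency matrix, Z: per-node attention scores
    k = max(1, int(round(ratio * len(X))))
    idx = top_rank(Z, k)
    # X_out = X_{idx,:} ⊙ Z_mask: keep the selected rows, each scaled by its score
    X_out = [[x * Z[i] for x in X[i]] for i in idx]
    # A_out = A_{idx,idx}: row- and column-wise indexed adjacency matrix
    A_out = [[A[i][j] for j in idx] for i in idx]
    return X_out, A_out

X_out, A_out = sag_pool(
    [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]],          # X
    [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]],  # A
    [0.9, 0.1, 0.5, 0.7],                                      # Z
)
```

With ratio 0.5 the two highest-scoring nodes (indices 0 and 3) survive; their feature rows are scaled by their scores and the adjacency matrix shrinks to 2 x 2.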
This post covers the paper "Self-Attention Graph Pooling". The paper proposes a new graph pooling method that computes scores with a self-attention mechanism, taking both node features and graph topology into account, and learns hierarchical graph representations in an end-to-end fashion.

🍁 1. Background

In recent years, graph neural networks have been widely applied in many fields and have shown very good performance, but downsampling on graphs remains a challenge.
To further improve graph pooling, SAGPool uses both topology and node features to learn hierarchical representations in a simpler way.

2. Model

The core of SAGPool is using a GNN to compute self-attention scores. The SAGPool layer is illustrated below:

An illustration of the SAGPool layer.

Self-attention mask: the self-attention scores are computed with graph convolution, following Semi-Supervised Classification with Graph Convolutional Networks ...
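The GCN-style score is Z = σ(D̃^{-1/2} Ã D̃^{-1/2} X Θ_att), i.e. a one-output-channel graph convolution over the node features. A plain-Python sketch of that computation (using tanh for σ; the toy graph and the single-column Θ_att are assumptions for illustration):

```python
import math

def gcn_attention_scores(X, A, theta):
    # one-channel GCN layer: Z = tanh(D^{-1/2} (A + I) D^{-1/2} X theta)
    n = len(A)
    # Ã = A + I: add self-loops
    A_tilde = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in A_tilde]
    # symmetrically normalized adjacency D^{-1/2} Ã D^{-1/2}
    A_hat = [[A_tilde[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
             for i in range(n)]
    # project each node's features to a scalar, then propagate and squash
    XT = [sum(X[j][f] * theta[f] for f in range(len(theta))) for j in range(n)]
    return [math.tanh(sum(A_hat[i][j] * XT[j] for j in range(n))) for i in range(n)]

# two-node toy graph: one edge, one-hot node features
Z = gcn_attention_scores([[1.0, 0.0], [0.0, 1.0]], [[0, 1], [1, 0]], [0.5, -0.5])
```

Each node ends up with a single score in (-1, 1); the top-ranked nodes according to Z are the ones the pooling layer retains.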
2.2 Graph Pooling

Pooling layers allow CNN architectures to reduce the number of parameters (only the parameters inside each convolution kernel are needed), which helps avoid overfitting. To bring this benefit of CNNs to graphs, learning a pooling operation for GNNs is necessary. Graph pooling methods fall into three categories: topology based, global, and hierarchical pooling.

Topology based pooling. Earlier work used graph coarsening algorithms rather than neural networks. Spectral clustering ...
Wrap code blocks that do not need gradient computation in with torch.no_grad(). The difference between model.eval() and torch.no_grad() is that model.eval() switches the network to evaluation mode, e.g. BN and dropout use different computations in the training and testing phases, whereas torch.no_grad() turns off PyTorch's autograd machinery for tensors, reducing memory usage and speeding up computation; results produced under it cannot be used for loss back-propagation.
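A small PyTorch sketch contrasting the two (the tiny Linear+Dropout model and the tensor sizes are placeholders for illustration):

```python
import torch
import torch.nn as nn

# hypothetical tiny model, just to contrast the two switches
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
x = torch.randn(2, 4)

model.eval()            # switches Dropout/BatchNorm to test-time behavior
y1 = model(x)           # autograd is still recording: y1.requires_grad is True

with torch.no_grad():   # disables autograd recording entirely
    y2 = model(x)       # y2.requires_grad is False; no loss can be back-propagated
```

In evaluation code both are typically combined: call model.eval() once, then run the validation loop inside torch.no_grad().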
Self-Attention Graph Pooling is a method that extends the deep learning framework to graph-structured data, addressing the challenge of downsampling in graphs. The method introduces a self-attention mechanism to pool nodes in graphs, considering both node features and graph topology. This ...
Pooling

Part 5: Training

First run the model forward to compute its predicted label distribution, then back-propagate the error. Note that the graph ...
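The training step described above can be sketched as follows (the two-layer MLP, the random stand-in data, and the Adam hyperparameters are assumptions, not the paper's actual SAGPool network):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

graph_feats = torch.randn(32, 8)        # stand-in for pooled graph embeddings
labels = torch.randint(0, 3, (32,))     # stand-in graph labels

model.train()
optimizer.zero_grad()
scores = model(graph_feats)             # forward pass: predicted label scores
loss = criterion(scores, labels)
loss.backward()                         # backward pass: propagate the error
optimizer.step()                        # update parameters
```

One such forward/backward/step cycle per batch, repeated over epochs, is the standard PyTorch training loop.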
Self-attention using graph convolution allows our pooling method to consider both node features and graph topology. To ensure a fair comparison, the same training procedures and model architectures were used for the existing pooling methods and our method. The experimental results demonstrate that our...
(input_x, output_Tr)  # torch.Size([31, 17, 19])
# sum pooling
graph_embeddings = torch.spmm(graph_pool, output_Tr)  # torch.Size([2, 31])
graph_embeddings = self.dropouts[layer_idx](graph_embeddings)
# Produce the final scores
prediction_scores += self.predictions[layer_idx](graph_embeddings)