3. Exphormer: Sparse Transformers for Graphs
4. Simplifying and Empowering Transformers for Large-Graph Representations
5. DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion
6. GraphGPS: General Powerful Scalable Graph Transformers
7. Structure-Aware Transformer for Graph Representation Learning
Paper title: NAGphormer: A Tokenized Graph Transformer for Node Classification in Large Graphs. Paper link: https://openreview.net/pdf?id=8KYeilT3Ow

[Figure 1: author information]

This paper, published at ICLR 2023, extends Transformers to large graphs. Almost all earlier Graph Transformers feed the entire graph into the model, and because the attention mechanism has quadratic computational complexity, ...
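To make the tokenization idea concrete, here is a minimal sketch of the Hop2Token step the paper describes: each node gets a short sequence of its 0..K-hop aggregated features, so attention runs over K+1 tokens per node instead of over all N nodes. Shapes and names below are illustrative assumptions, not the authors' code.

```python
import torch

def hop2token(adj_norm: torch.Tensor, x: torch.Tensor, num_hops: int) -> torch.Tensor:
    """Build a (K+1)-token sequence per node from multi-hop aggregations.

    adj_norm: [N, N] normalized adjacency, e.g. D^-1/2 (A + I) D^-1/2
    x:        [N, d] node features
    returns:  [N, K+1, d] token sequences (hop 0 = the raw features)
    """
    tokens = [x]
    h = x
    for _ in range(num_hops):
        h = adj_norm @ h                  # aggregate one hop further
        tokens.append(h)
    return torch.stack(tokens, dim=1)

# Each node is now an independent length-(K+1) sequence, so a standard
# Transformer encoder can process nodes in mini-batches:
N, d, K = 1000, 64, 3
adj_norm = torch.eye(N)                   # placeholder for the real normalized adjacency
x = torch.randn(N, d)
seq = hop2token(adj_norm, x, K)           # [N, 4, 64]
layer = torch.nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
encoder = torch.nn.TransformerEncoder(layer, num_layers=2)
out = encoder(seq)                        # cost scales with N * (K+1)^2, not N^2
```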
- (GROVER) Self-Supervised Graph Transformer on Large-Scale Molecular Data (github.com/tencent-aila)
- (GT) A Generalization of Transformer Networks to Graphs (github.com/graphdeeplea)
- GraphiT: Encoding Graph Structure in Transformers [Code is unavailable]
- (GraphTrans) Representing Long-Range Context ...
The main differences among transformers on graphs lie in (1) how the PE is designed, and (2) how structural information is exploited (e.g., combining with a GNN, using structural information to correct the attention scores, etc.; a minimal sketch of the latter follows below). Existing methods almost all target small graphs (at most a few hundred nodes). Graph-BERT does address node classification, but it first obtains subgraphs via sampling, which harms ...
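A hedged sketch of option (2): add a learned bias to the attention logits depending on whether an edge connects the two nodes (Graphormer's spatial bias is a well-known instance of this pattern). This is an illustrative module, not any specific paper's implementation.

```python
import torch
import torch.nn.functional as F

class StructureBiasedAttention(torch.nn.Module):
    """Single-head attention whose logits get a learned shift per relation
    type; here only two types: non-edge (0) and edge (1)."""

    def __init__(self, dim: int):
        super().__init__()
        self.qkv = torch.nn.Linear(dim, 3 * dim)
        self.edge_bias = torch.nn.Parameter(torch.zeros(2))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: [N, dim] node features, adj: [N, N] binary adjacency
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = q @ k.T / q.shape[-1] ** 0.5         # [N, N] content scores
        logits = logits + self.edge_bias[adj.long()]  # structural correction
        return F.softmax(logits, dim=-1) @ v
```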
A Generalization of Transformer Networks to Graphs (DLG-AAAI 2021) https://arxiv.org/abs/2012.09699

[Figure: model architecture]

Its main proposal is to use Laplacian eigenvectors as the PE, which works better than the PE used in GraphBERT.

[Figure: comparison of different PEs]

However, the model performs better when self-attention attends only to neighbors, so rather than a graph tr...
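Regardless, the Laplacian-eigenvector PE itself is simple to state: take the k eigenvectors of the normalized Laplacian with the smallest non-trivial eigenvalues and attach them to the node features; random sign flips during training are the usual fix for the eigenvectors' sign ambiguity. A minimal sketch under those assumptions:

```python
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int, rng=np.random.default_rng()) -> np.ndarray:
    """k smallest non-trivial eigenvectors of L = I - D^-1/2 A D^-1/2 as PE."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg, dtype=float)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigval, eigvec = np.linalg.eigh(lap)         # eigenvalues in ascending order
    pe = eigvec[:, 1 : k + 1]                    # drop the trivial first eigenvector
    return pe * rng.choice([-1.0, 1.0], size=k)  # random sign flip per dimension
```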
- GOAT: A Global Transformer on Large-scale Graphs. ICML 2023. [paper]
- EXPHORMER: Sparse Transformers for Graphs. ICML 2023. [paper]
- KDLGT: A Linear Graph Transformer Framework via Kernel Decomposition Approach. IJCAI 2023. [paper]
- Gapformer: Graph Transformer with Graph Pooling for Node Classification ...
3. Key Design Aspects for Graph Transformer

We find that attention using graph sparsity and positional encodings are two key design aspects for the generalization of transformers to arbitrary graphs. Now, we discuss these from the contexts of both NLP and graphs to make the proposed extensions clear...
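On the graph-sparsity side, the simplest reading is to let each node attend only to its 1-hop neighbours by masking the attention logits, keeping the cost proportional to the number of edges. A dense-tensor sketch for clarity (real implementations use edge-list scatter operations instead of an N-by-N mask):

```python
import torch
import torch.nn.functional as F

def neighbor_attention(q, k, v, adj):
    """Attention in which node i may only attend to j where adj[i, j] = 1.

    q, k, v: [N, d] projections; adj: [N, N] binary, with self-loops included
    so that every row has at least one admissible key.
    """
    logits = q @ k.T / q.shape[-1] ** 0.5
    logits = logits.masked_fill(adj == 0, float("-inf"))  # enforce graph sparsity
    return F.softmax(logits, dim=-1) @ v
```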
GraphiT: Encoding Graph Structure in Transformers (arXiv 2021) https://arxiv.org/abs/2106.05667

This work shows that incorporating structural and positional information into the transformer can outperform existing classical GNNs. GraphiT (1) uses relative positional encodings based on kernels on graphs to influence the attention scores (a minimal sketch follows below), and (2) encodes local sub-structures for the model to exploit. Experiments find that whether these two techniques are used on their own or combined, they achieve ...
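A minimal sketch of technique (1), assuming the heat/diffusion kernel exp(-beta*L) as the kernel on the graph: the kernel value between two nodes reweights their attention score before normalization, acting as a relative PE. The kernel choice and function names here are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(adj: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """Heat kernel exp(-beta * L) with the combinatorial Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return expm(-beta * lap)

def kernel_modulated_attention(q, k, v, kernel):
    """Content scores are multiplied by the positional kernel, then normalized."""
    scores = np.exp(q @ k.T / np.sqrt(q.shape[-1])) * kernel  # [N, N]
    attn = scores / scores.sum(axis=-1, keepdims=True)
    return attn @ v
```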
Image-to-graph transformers can effectively encode image information in graphs but are typically difficult to train and require large annotated datasets. Contrastive learning can increase data efficiency by enhancing feature representations, but existing methods are not applicable to graph labels because they ...