However, current adversarial attack methods on GNNs neglect the characteristics and applications of the distributed scenario, leading to suboptimal performance and inefficiency when attacking distributed GNN training.
A curated list of graph adversarial learning literature is maintained at https://github.com/YingtongDou/graph-adversarial-learning-literature.

Graph Neural Networks (GNNs)
In this section we first present the general pipeline for designing a GNN model. We then detail each step, such as selecting computational modules, accounting for graph type and scale, and designing the loss function, in the following sections (Section 3, Instantiations of computational modules; Section 4, Variants ...).
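To make the pipeline concrete, here is a minimal PyTorch sketch that assembles the pieces named above: a computational (propagation) module, a representation choice shaped by graph type and scale (a dense normalized adjacency matrix, suitable only for small graphs), and a loss function. The two-layer design and the cross-entropy node-classification loss are illustrative assumptions, not a specific model from the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGNN(nn.Module):
    """Toy node-classification GNN assembled along the pipeline above:
    a propagation (computational) module applied twice, followed by a loss."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, adj_norm):
        # adj_norm: normalized adjacency matrix; how it is built depends on
        # the graph type and scale (dense here, purely for illustration)
        h = F.relu(adj_norm @ self.lin1(x))   # propagation step 1
        return adj_norm @ self.lin2(h)        # propagation step 2 -> class logits

# Designing the loss function: supervised node classification on labeled nodes.
def loss_fn(logits, labels, train_mask):
    return F.cross_entropy(logits[train_mask], labels[train_mask])
```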
For instance, the Gated GNN [47] employs a parameterized function in the form of a gated recurrent unit (GRU) [48] to update node representations. This function considers both the hidden states of the neighboring nodes and the node's previous hidden state. After updating the hidden state, it ...
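A minimal sketch of this kind of GRU-based update is shown below, assuming a dense adjacency matrix and a single linear message transform; both are illustrative choices, not the exact parameterization used in [47].

```python
import torch
import torch.nn as nn

class GatedUpdateLayer(nn.Module):
    """One propagation step in the spirit of a gated GNN: aggregate neighbor
    states, then let a GRU cell combine them with the node's previous state."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.msg = nn.Linear(hidden_dim, hidden_dim)  # message transform (assumed form)
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, h, adj):
        # h:   [num_nodes, hidden_dim] previous hidden states
        # adj: [num_nodes, num_nodes]  adjacency matrix (dense for simplicity)
        m = adj @ self.msg(h)   # sum of transformed neighbor states
        return self.gru(m, h)   # GRU(input=aggregated message, hidden=previous state)
```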
The content covers recent papers on node representation learning, knowledge graph representation learning, introductions to graph neural networks, GNN applications, graph generation, and visualization, and it also collects the currently popular open-source GNN platforms. This material is compiled from the web; source: https://github.com/DeepGraphLearning/LiteratureDL4Graph. Contents: 1. Node representation learning; 1.1 Unsupervised node representation learning ...
Revisiting Attack-caused Structural Distribution Shift in Graph Anomaly Detection. IEEE TKDE, 2024.
F2GNN: An Adaptive Filter with Feature Segmentation for Graph-Based Fraud Detection. ICASSP, 2024.
RAGFormer: Learning Semantic Attributes and Topological Structure for Fraud ...
GNNs have been a research hotspot since last year, and GNN-related keywords appear frequently in the titles of papers at this year's major AI conferences, so a deeper understanding of the field is well worth pursuing. Here we share one of the most thoroughly organized GNN resource lists we have seen so far.
HebCGNN: Hebbian-enabled causal classification integrating dynamic impact valuing. Knowledge-Based Systems, 2025. Abstract: Classifying graph-structured data presents significant challenges due to the diverse features of nodes and edges and their complex relationships. While Graph Neural Networks (GNNs) are ...
However, recent studies have shown that GNNs are vulnerable to carefully crafted perturbations, known as adversarial attacks. Adversarial attacks can easily fool a GNN's predictions on downstream tasks. This vulnerability raises concerns about applying GNNs in safety-critical applications. Therefore, developing robust algorithms to defend against adversarial attacks is of great importance. A natural idea for defending against adversarial attacks is to clean the perturbed graph. Clearly, real-world graphs share some intrinsic ...
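As one illustration of exploiting such intrinsic structure (a commonly assumed property is that clean graphs are approximately low-rank), the sketch below purifies a perturbed adjacency matrix by keeping only its top singular components and re-binarizing. The rank and threshold values are arbitrary assumptions for the example, not parameters taken from the text.

```python
import numpy as np

def purify_adjacency(adj_perturbed, rank=10, threshold=0.5):
    """Low-rank purification sketch: reconstruct the adjacency matrix from its
    top singular components, then keep only strong entries. Both parameters
    are illustrative and would need tuning on a real graph."""
    u, s, vt = np.linalg.svd(adj_perturbed, full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank, :]   # rank-k approximation
    return (low_rank > threshold).astype(float)          # re-binarize the cleaned graph
```

In practice the purified adjacency matrix would then be fed to the downstream GNN in place of the perturbed one; sparsity-based filtering is another intrinsic-property heuristic used in the same spirit.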