In this chapter, we review existing research on graph adversarial attacks. In particular, we briefly summarize and classify existing graph adversarial attack methods, e.g., heuristic-, gradient- and reinforcement-learning-based approaches, and then choose several classic adversarial attack methods on ...
If the target node v0 is modified directly, the attack is called a direct attack; conversely, modifying nodes other than v0 so as to indirectly affect the classification of v0 is called an influencer attack. The graph-based adversarial attack model can be defined as Eq. (1), where Z* is the GCN classification model, θ* are the parameters trained on the perturbed graph, and the constraint (A′, X′) ≈ (A, X) ensures that the perturbation remains unnoticeable.
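For concreteness, Eq. (1) can be reconstructed in the bi-level form used by Nettack [1]; the admissible perturbation set $\hat{\mathcal{P}}$ (budget and unnoticeability constraints) is assumed from that paper rather than restated here:

\begin{equation}
\operatorname*{arg\,max}_{(A',X') \in \hat{\mathcal{P}}} \;\; \max_{c \neq c_{\mathrm{old}}} \; \ln Z^{*}_{v_0, c} - \ln Z^{*}_{v_0, c_{\mathrm{old}}}
\quad \text{s.t.} \quad Z^{*} = f_{\theta^{*}}(A', X'), \;\; \theta^{*} = \operatorname*{arg\,min}_{\theta} \, \mathcal{L}(\theta; A', X')
\end{equation}

Here $c_{\mathrm{old}}$ is the class originally predicted for $v_0$, so the objective maximizes the log-probability margin of some wrong class over the original class under the model retrained on the perturbed graph.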
This section introduces three related papers: Adversarial Attacks on Neural Networks for Graph Data [1], Adversarial Attacks on Graph Neural Networks via Meta Learning [2], and Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective [3]. 1. Graph adversarial attacks. Owing to the strong representation-learning ability of deep neural networks, in recent years they have ...
Exploratory Adversarial Attacks on Graph Neural Networks observes that gradient-based strategies which rely on the maximum gradient of the training loss may not produce good results when attacking GNN models, because the graph structure is discrete. This raises the question: can we derive an effective way to select perturbations for attacking a GNN? The paper proposes a novel ... (a minimal sketch of the max-gradient flip strategy it criticizes is given below).
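The following is an illustration only, not the method proposed in that paper: a minimal PyTorch sketch of a greedy max-gradient edge flip on a dense adjacency matrix. The surrogate `model`, the mask and tensor names, and the helper `greedy_gradient_edge_flip` are assumptions for the sake of the example.

```python
import torch
import torch.nn.functional as F

def greedy_gradient_edge_flip(model, adj, features, labels, train_mask):
    """Flip the single adjacency entry whose first-order effect most increases the training loss."""
    adj = adj.clone().detach().requires_grad_(True)
    logits = model(features, adj)
    loss = F.cross_entropy(logits[train_mask], labels[train_mask])
    grad, = torch.autograd.grad(loss, adj)

    # Flipping entry a_ij changes it by (1 - 2*a_ij), so the first-order loss change
    # is grad_ij * (1 - 2*a_ij); pick the flip with the largest estimated increase.
    score = grad * (1 - 2 * adj.detach())
    score.fill_diagonal_(float("-inf"))          # never flip self-loops
    i, j = divmod(score.argmax().item(), score.size(1))

    perturbed = adj.detach().clone()
    perturbed[i, j] = 1 - perturbed[i, j]        # keep the graph undirected
    perturbed[j, i] = perturbed[i, j]
    return perturbed
```

Because the structure is binary, this first-order score can be a poor proxy for the true loss change after the flip, which is exactly the failure mode the exploratory attack targets.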
Scalable Attack on Graph Data by Injecting Vicious Nodes (ECML-PKDD) introduces the AFGSM model, a gradient-based attack that uses a GCN surrogate and targets node classification; it attacks GCN, GAT and DeepWalk, is compared against the Nettack, FGSM and Metattack baselines with accuracy as the metric, and is evaluated on the CiteSeer, Cora, DBLP, Pubmed and Reddit datasets. Adversarial Attack on Hi...
The work in [33] sought to attack face recognition systems by optimizing the color of sunglasses appearing in an image. Xian et al. [43] proposed a deep-architecture-based adversarial attack against link prediction methods on graph data. In addition, Huang et al. [18] developed an adversarial...
Since a GNN is trained on node features and the connectivity between nodes, an attacker can inject a small amount of erroneous information into the training data in order to attack it. Typically, adversarial attacks are categorized into three types: white-box attac...
Then, new geometries are sampled by performing an adversarial attack on the ground-state conformation and are later evaluated using DFT. After a new committee is trained on the newly sampled data points, the conformational landscape is analyzed and compared with that obtained from random displacements. Figure 4a shows a ...
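A minimal sketch of this uncertainty-driven sampling step, assuming a committee of energy models and placeholder names (`models`, `x0`, `adversarial_displacement`); the actual attack objective and the DFT interface are not specified here.

```python
import torch

def adversarial_displacement(models, x0, steps=50, lr=0.01):
    """Gradient-ascend a displacement of geometry x0 that maximizes committee disagreement."""
    delta = torch.zeros_like(x0, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        energies = torch.stack([m(x0 + delta) for m in models])  # one prediction per committee member
        loss = -energies.var(dim=0).mean()                       # negative uncertainty, so ascent on disagreement
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x0 + delta).detach()  # candidate geometry to label with DFT and add to the training set
```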
A reference implementation of "Adversarial Attacks on Neural Networks for Graph Data" (Nettack) is available on GitHub at danielzuegner/nettack.