We also evaluate four possible defenses against our attacks and find that they are not effective, which motivates future work on better defenses. The main contributions are as listed above. 2 Background: this section covers Horizontal Federated Learning and ...
2⃣️ Active Label Inference Attack with the Malicious Local Optimizer. Active attack: the attacker is the Passive Party (malicious). Method: in VFL, the attacker (Passive Party) can maliciously modify its local optimizer to accelerate gradient descent on its bottom model, thereby gaining priority when the Top model's parameters are optimized in the current round, so that the attacker's local bottom model indirectly obtains a "more favorable" ...
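As a rough illustration of that mechanism, here is a minimal PyTorch sketch of a gradient-amplifying local optimizer. The class name and the constant factor `gamma` are our assumptions for illustration; the paper's malicious optimizer adapts the scaling per parameter from gradient statistics rather than using a fixed constant.

```python
import torch

class MaliciousSGD(torch.optim.SGD):
    """Hypothetical sketch: amplify the attacker's bottom-model gradients
    before the usual SGD update, so the attacker's bottom model descends
    faster than the honest parties' models."""

    def __init__(self, params, lr=0.01, gamma=2.0):
        super().__init__(params, lr=lr)
        self.gamma = gamma  # amplification factor (assumed; the paper adapts it)

    @torch.no_grad()
    def step(self, closure=None):
        # Scale every gradient up, then let ordinary SGD apply the update.
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is not None:
                    p.grad.mul_(self.gamma)
        return super().step(closure)
```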
This repository contains the code for the paper "Attackers Are Not the Same! Unveiling the Impact of Feature Distribution on Label Inference Attacks". - WWlnZSBMaXU/Attackers-Are-Not-the-Same
Label flip 61.0% 30.8%
Table 8: OSLO's performance against target models trained with adversarial training on CIFAR-10.
Model | Threshold τ | Test ACC (%) | Training time (min) | Attack: TPR | FPR
ResNet18 (no adversarial training) | 0.01 | 82.62 | 37.8 | 0.9 | ...
In this article, we propose, for the first time, label-only membership inference attacks against semantic segmentation models. With a well-designed attack framework, we achieve a considerably higher attack success rate than previous work. In addition, we have discussed ...
To bridge this gap, we propose the first defense for split-learning-based VFL against label inference attacks. The experimental results across multiple datasets and model architectures demonstrate the efficacy of our defense in reducing the label inference accuracy of the malicious party, while ...
However, recent research (USENIX Security'22) reveals that it is still possible to conduct label inference attacks in VFL, in which an attacker can acquire the privately owned labels of other participants; even VFL constructed with model splitting (the kind of VFL architecture with higher security ...
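For context, here is a minimal sketch of the model-splitting VFL architecture referenced above, assuming a two-party setup; all layer names and dimensions are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

# Each party trains a bottom model on its own features; only the active
# party holds the labels and the top model.
bottom_active = nn.Sequential(nn.Linear(10, 16), nn.ReLU())    # active party
bottom_passive = nn.Sequential(nn.Linear(10, 16), nn.ReLU())   # passive party
top = nn.Linear(32, 2)                                         # active party only

x_a, x_p = torch.randn(4, 10), torch.randn(4, 10)  # each party's features
y = torch.randint(0, 2, (4,))                      # labels: active party only

logits = top(torch.cat([bottom_active(x_a), bottom_passive(x_p)], dim=1))
loss = nn.functional.cross_entropy(logits, y)
loss.backward()
# The passive party never sees y; it only receives the gradient w.r.t. its
# embedding, yet per the USENIX Security'22 result even this can leak labels.
```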
Extensive experiments show that our global adaptive MIA outperforms current label-only membership inference attacks on the CIFAR10 and CIFAR100 datasets, especially on the true-positive-rate-at-low-false-positive-rates metric. Xu, Jiacheng (Tongji University); Hu, Jianpeng; ...
Evaluated four possible defenses: Noisy Gradients, Gradient Compression, Privacy-Preserving Deep Learning, and DiscreteSGD. Found that some of these defenses can mitigate the direct label inference attack, but they are not necessarily effective against the passive and active label inference attacks (the first two are sketched below). Why leakage: leakage during bottom model training (intermediate results): VFL requires each participant to train a local bottom model that embeds its input features into a latent ...
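A minimal sketch of the first two defenses, assuming they are applied to the gradient the active party returns to a passive party; the hyperparameter values are illustrative, not the paper's.

```python
import torch

def noisy_gradients(grad: torch.Tensor, noise_scale: float = 1e-3) -> torch.Tensor:
    """Noisy Gradients defense: perturb the returned gradient with
    Laplacian noise (noise_scale is an assumed hyperparameter)."""
    noise = torch.distributions.Laplace(0.0, noise_scale).sample(grad.shape)
    return grad + noise

def gradient_compression(grad: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """Gradient Compression defense: keep only the largest-magnitude
    entries of the gradient and zero out the rest."""
    k = max(1, int(grad.numel() * keep_ratio))
    flat = grad.flatten()
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(grad.shape)
```

Both perturbations trade model utility against leakage, which is consistent with the note above that they blunt the direct attack yet may leave the passive and active attacks effective.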
However, existing federated learning is also susceptible to privacy breaches, e.g., label inference attacks against vertical federated learning scenarios, in which an adversary can infer the labels of other participants from the trained model, leading to serious privacy leakage. In ...