QEBA builds on HSJA, accelerating the gradient-estimation step by sampling in a lower-dimensional subspace; it offers the following three sampling schemes. QEBA-S (Spatial) exploits the local similarity of the image's spatial domain: on a small $\lfloor N/r\rfloor\times\lfloor N/r\rfloor$ grid it defines a basis whose vectors are 1 at a single position and 0 elsewhere, then bilinearly interpolates each vector up to the original $N\times N$ space to form the sampling basis, so the sampling dimension is basi...
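A minimal sketch of the QEBA-S idea (my own illustration, not the authors' code; the function name `qeba_s_sample` and the unit-norm normalization are assumptions): drawing random directions on the small grid and bilinearly upsampling them is equivalent to taking random combinations of the upsampled basis vectors.

```python
import numpy as np

def qeba_s_sample(N, r, B, rng=None):
    """Sample B random perturbation directions in the low-dimensional
    spatial subspace used by QEBA-S (sketch, not the reference code)."""
    if rng is None:
        rng = np.random.default_rng()
    n = N // r
    # i.i.d. Gaussian coefficients on the small (n x n) grid.
    low = rng.standard_normal((B, n, n))
    # Bilinear upsampling of each small image to N x N.
    xs = np.linspace(0, n - 1, N)
    i0 = np.floor(xs).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    w = xs - i0
    rows = low[:, i0, :] * (1 - w)[None, :, None] + low[:, i1, :] * w[None, :, None]
    out = rows[:, :, i0] * (1 - w)[None, None, :] + rows[:, :, i1] * w[None, None, :]
    # Normalize each direction to unit norm, as in Monte Carlo
    # gradient estimation.
    out /= np.linalg.norm(out.reshape(B, -1), axis=1)[:, None, None]
    return out
```

The resulting directions live in an $\lfloor N/r\rfloor^2$-dimensional subspace of the full $N^2$-dimensional image space, which is what cuts the query cost of the gradient estimate.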
The goal of this paper is to improve the transferability of adversarial examples (i.e., to generate transferable adversarial examples). Transfer-based attacks are one class of black-box attacks: they exploit the transferability of adversarial examples by running a white-box attack on a substitute (surrogate) model, in the hope that the resulting adversarial examples also fool the unknown black-box model.
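The transfer-attack recipe can be sketched with a tiny linear surrogate (everything here is a hypothetical toy, for illustration only): run one-step FGSM against the white-box surrogate, then query the black box with the result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical substitute model: a linear classifier sign(w_sur . x)
# whose weights we know (white-box access).
w_sur = rng.standard_normal(10)

def fgsm_on_surrogate(x, y, eps=0.3):
    # One-step FGSM on a hinge-style loss L = max(0, 1 - y * w_sur.x):
    # dL/dx = -y * w_sur, so x' = x + eps * sign(-y * w_sur).
    return x + eps * np.sign(-y * w_sur)

# Unknown black-box model: similar but different weights; we may only
# query its predicted label, never its gradients.
w_bb = w_sur + 0.3 * rng.standard_normal(10)
def black_box(x):
    return np.sign(w_bb @ x)

x = rng.standard_normal(10)
y = black_box(x)                  # original black-box label
x_adv = fgsm_on_surrogate(x, y)   # crafted purely on the surrogate
# If the two models are similar enough, the perturbation often
# transfers, i.e. black_box(x_adv) != y.
```

The attack provably lowers the surrogate's margin; whether it also flips the black box is exactly the transferability question the paper studies.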
Frequency domain transform
Deep neural networks (DNNs) are vulnerable to adversarial examples, which lead to DNN misclassification. Perturbations in adversarial examples usually take the form of noise. In this paper, we propose a lightweight joint contrastive learning and frequency ...
It innovatively applies a grid mask in the frequency domain to generate adversarial examples. This transformation weakens the spatial correlation among image pixels and offers a fresh perspective for enhancing the transferability of adversarial examples. Experiments on adversarial attacks using the...
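A hedged sketch of what a frequency-domain grid mask might look like (an illustration only; the paper's exact transform, tile size, and mask layout may differ, and I use a 2-D FFT with random block zeroing here):

```python
import numpy as np

def freq_grid_mask(img, block=8, keep_prob=0.7, rng=None):
    """Illustrative sketch (an assumption, not the paper's exact
    method): move the image to the frequency domain, zero out a random
    grid of block x block frequency tiles, and transform back."""
    if rng is None:
        rng = np.random.default_rng()
    H, W = img.shape
    F = np.fft.fft2(img)
    mask = np.ones((H, W))
    for i in range(0, H, block):
        for j in range(0, W, block):
            if rng.random() > keep_prob:   # drop this frequency tile
                mask[i:i + block, j:j + block] = 0.0
    # Masking FFT coefficients non-symmetrically can leave a small
    # imaginary residue; keep only the real part for this sketch.
    return np.real(np.fft.ifft2(F * mask)), mask
```

Applying such a mask to the input at each attack iteration perturbs the frequency content rather than individual pixels, which is the sense in which it reduces spatial correlation.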
Nevertheless, all of these methods operate in the time domain, without any time-frequency transformation, and scarcely consider that adversarial examples often involve high-frequency phenomena [22]. Therefore, regulating adversarial training in the frequency domain is essential to boost adversarial...
Examples of different regions are shown in Figure 4 (c)-(l). We consider only the facial features, because most deepfake works focus on them and they convey the most information in a facial image. Tg: the reference number, Tg ∈ {0, ·...
See: https://nicholas.carlini.com/code/audio_adversarial_examples
As above, this is analogous to image attacks: adding a segment of noise causes the neural network to make a wrong prediction.
Attacks on ASV
As above, the classification task in automatic speaker verification can likewise be attacked by adding noise.
Wake Up Words
Hidden Voice Attack
As above, the TA played a noise clip that in fact encoded the command "turn on the computer".
the community of deep learning researchers, especially those interested in image classification. It has also been observed that GAN models are relevant in domains other than image classification. Some examples are image-to-image translation, which is often applied to translating satellite photographs to...
Learning Transferable Adversarial Examples via Ghost Networks (AAAI 2020)
Beyond ensembling multiple models, self-ensembling of a single model is also effective: the Ghost Networks proposed here are derived from a single base model by adding dropout, perturbing the residual (skip) connections, and similar operations, yielding many virtual models; ensembling these models to generate adversarial examples improves their transferability.
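The skip-connection perturbation can be sketched as follows (a toy illustration, not the paper's code; the uniform scaling range is an assumed placeholder): each forward pass draws a fresh random factor for every residual branch, so every pass behaves like a slightly different "ghost" model.

```python
import numpy as np

def residual_block(x, W, lam):
    # Toy residual block: y = x + lam * ReLU(W @ x).
    # In a normal network lam = 1; Ghost Networks draw lam at random.
    return x + lam * np.maximum(W @ x, 0.0)

def ghost_forward(x, weights, rng, low=0.8, high=1.2):
    # Every skip connection gets its own random scaling factor, so each
    # forward pass behaves like a slightly different "ghost" model.
    for W in weights:
        x = residual_block(x, W, rng.uniform(low, high))
    return x
```

Averaging surrogate gradients over many such perturbed forward passes gives the self-ensemble used to craft the adversarial example; the (0.8, 1.2) range here is an assumption, not the paper's setting.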
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples
Learning to Drop Out: An Adversarial Approach to Training Sequence VAEs
Robust Learning against Relational Adversaries
On the Tradeoff Between Robustness and Fairness ...