Deep learning models suffer from a phenomenon known as adversarial attacks: minor, carefully crafted changes to the model input can fool a classifier on a particular example. The literature mostly considers adversarial attacks on models with images and other structured inputs. However, the adversarial ...
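As a minimal formalisation of the claim above (the notation is mine, not from the quoted abstract): the attacker seeks the smallest perturbation that flips the classifier's decision.

```latex
% Smallest-perturbation view of an adversarial attack on classifier f at input x:
\min_{\delta} \ \|\delta\|_{p} \quad \text{subject to} \quad f(x + \delta) \neq f(x)
```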
Decoupling Direction and Norm for Efficient Gradient-Based Adversarial Attacks and Defenses: Preface; 1. Problem Statement; 2. Proposed Method; 2.1 Related Work; 2.2 Algorithm Description; 3. Experimental Results; 3.1 Untargeted Attack; 3.2 Targeted Attack; 3.3 Defense Evaluation; 4. Conclusion. Decoupling Direction and Norm for Efficient Gradient-Based L2 Ad...
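A minimal sketch of the decoupled direction-and-norm idea from the paper outlined above, assuming a PyTorch classifier `model` with inputs in [0, 1]. The function name, the hyperparameters `alpha` and `gamma`, and the projection details are my simplification, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def ddn_attack(model, x, y, steps=100, alpha=1.0, gamma=0.05):
    # Sketch of the decoupled update: gradient steps set the *direction*
    # of the perturbation, while a separate L2 budget eps (the *norm*)
    # shrinks when the point is adversarial and grows when it is not.
    delta = torch.zeros_like(x, requires_grad=True)
    eps = torch.ones(x.size(0), device=x.device)  # initial L2 norm budget
    for _ in range(steps):
        logits = model(x + delta)
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, delta)
        # Move along the (unit-normalised) gradient direction.
        g = grad.flatten(1)
        g = g / g.norm(dim=1, keepdim=True).clamp_min(1e-12)
        d = delta.detach().flatten(1) + alpha * g
        # Decouple the norm: shrink eps if already adversarial, else grow it.
        is_adv = logits.argmax(dim=1) != y
        eps = torch.where(is_adv, eps * (1 - gamma), eps * (1 + gamma))
        # Project the perturbation back onto the L2 sphere of radius eps.
        d = d * (eps / d.norm(dim=1).clamp_min(1e-12)).unsqueeze(1)
        x_adv = (x + d.view_as(x)).clamp(0, 1)  # stay in the valid pixel box
        delta = (x_adv - x).detach().requires_grad_(True)
    return (x + delta).detach()
```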
As adversarial attacks pose a serious threat to the security of AI systems in practice, such attacks have been extensively studied in the context of computer vision applications. However, little attention has been paid to adversarial research on automatic path finding. In this paper, we show ...
the task of understanding and interpreting their internal workings, in the context of adversarial attacks, remains largely unsolved. Gradient-based universal adversarial attacks have been shown to be highly effective on large language models and potentially dangerous due to...
The adversarial example x_adv can potentially fool both the white-box model M_w and the black-box model M_b; however, this study primarily focuses on black-box attacks.

2.2. Fast Gradient Sign Method

The fast gradient sign method (FGSM), proposed by Goodfellow et al. [26], is an algorithm for ...
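For reference, FGSM takes a single step along the sign of the loss gradient, x_adv = x + ε · sign(∇_x J(θ, x, y)). A minimal PyTorch sketch, assuming a classifier `model` with inputs in [0, 1] (the function name and the clamping range are my assumptions):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # One-step FGSM: x_adv = x + epsilon * sign(grad_x J(theta, x, y)).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```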
Gradient-Based Adversarial Attacks Against Malware Detection by Instruction Replacement

Deep learning plays a vital role in malware detection. MalConv is a well-known open-source, deep-learning-based malware detection framework that is trained on raw bytes for malware binary detection. Researchers ...
Although empirical results on the effectiveness of adversarial example generation methods against defense mechanisms are discussed in detail in the literature, an in-depth study of the theoretical properties and perturbation effectiveness of these attacks has largely been lacking. In this ...
Keywords: adversarial attacks; text classification; loss-based implementation; gradient-based implementation. Adversarial examples are generated by adding infinitesimal perturbations to legitimate inputs so that incorrect predictions can be induced in deep learning models. They have received increasing attention recently ...
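As an illustration of the gradient-based implementation mentioned above, here is a hedged, HotFlip-style sketch that scores single-word substitutions by a first-order approximation of the loss change. It assumes a `model` that accepts input embeddings directly and an `nn.Embedding` table `embedding`; all names are hypothetical, not from the quoted work.

```python
import torch
import torch.nn.functional as F

def gradient_word_swap(model, embedding, token_ids, label):
    # Score every vocabulary word by how much swapping it in would raise
    # the loss, using the gradient of the loss w.r.t. the input embeddings.
    emb = embedding(token_ids).detach().requires_grad_(True)  # (seq, dim)
    loss = F.cross_entropy(model(emb.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    with torch.no_grad():
        # First-order loss change of swapping position i to word w:
        # (e_w - e_i) . grad_i
        scores = emb.grad @ embedding.weight.t()          # (seq, vocab)
        scores -= (emb.grad * emb).sum(-1, keepdim=True)  # current-word term
        pos = scores.max(dim=1).values.argmax()           # best position to edit
        new_id = scores[pos].argmax()                     # best replacement word
    adv_ids = token_ids.clone()
    adv_ids[pos] = new_id
    return adv_ids
```

A loss-based variant would instead re-evaluate the model on each candidate substitution and keep the one with the highest actual loss; the gradient-based version above trades that exactness for a single backward pass.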