[Paper Summary] Adversarial Examples Are Not Bugs, They Are Features

Preface: NeurIPS 2019, original paper: arxiv.org/abs/1905.0217... I wrote notes on this paper once before, but at the time I lacked the background to really understand it, so this is a follow-up. Updates have been slow recently because of the ECCV rebuttal. Written on May 29, 2022.

1. Paper Summary
Core idea: the paper "Adversarial Examples Are Not Bugs, They Are Features" challenges the conventional view of deep neural networks' fragility. It argues that adversarial examples are not anomalies but an inherent aspect of the learning process, rooted in how models exploit features in the data that generalize well yet are not robust. Non-robust features: the authors hold that adversarial vulnerability is a direct consequence of the mainstream supervised learning paradigm, in which a model learns to use any feature that is predictive of the label, whether or not it is robust.
@inproceedings{ilyas2019adversarial,
  title={Adversarial Examples Are Not Bugs, They Are Features},
  author={Ilyas, Andrew and Santurkar, Shibani and Tsipras, Dimitris and Engstrom, Logan and Tran, Brandon and Madry, Aleksander},
  booktitle={Advances in Neural Information Processing Systems},
  pages={125--136},
  year={2019}
}

In brief, the authors argue that standard training, because it learns both stable (robust) and unstable (non-robust) features whenever they are predictive, inevitably produces models that can be attacked through the non-robust ones.
"Adversarial vulnerability is a direct result of our models' sensitivity to well-generalizing features in the data." The authors also stress that, to reach high accuracy, a classifier will exploit every feature that helps prediction, both robust and non-robust; unlike humans, it does not privilege salient, highly robust cues such as shape. For example, to a classifier, ears and a tail carry no more weight than any other predictive pattern in the data.
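The paper makes "useful" and "robust" precise for binary classification with labels y ∈ {−1, +1} and feature functions f : X → R (notation as in the paper; Δ(x) is the set of allowed perturbations):

```latex
% f is \rho-useful: correlated with the label in expectation
\mathbb{E}_{(x,y)\sim\mathcal{D}}\bigl[\, y \cdot f(x) \,\bigr] \;\ge\; \rho

% f is \gamma-robustly useful: still useful under worst-case perturbations
\mathbb{E}_{(x,y)\sim\mathcal{D}}\Bigl[\, \inf_{\delta \in \Delta(x)} y \cdot f(x+\delta) \,\Bigr] \;\ge\; \gamma
```

A useful, non-robust feature is one that is ρ-useful for some ρ > 0 but not γ-robustly useful for any γ ≥ 0; these are exactly the features the paper says adversarial perturbations exploit.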
From the abstract: "We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features (derived from patterns in the data distribution) that are highly predictive, yet brittle and (thus) incomprehensible to humans. After capturing these features within a theoretical ..."
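As a toy illustration of how a worst-case perturbation can exploit predictive-but-brittle directions, here is a minimal FGSM-style sketch on a linear classifier. The linear model, the dimension, and the budget epsilon are illustrative assumptions, not the paper's setup; the point is that a per-coordinate change of size epsilon shifts the score by epsilon times the L1 norm of the weights, which grows with dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier: score(x) = w @ x, predicted label = sign(score).
d = 200
w = rng.normal(size=d)

# An input the classifier labels +1: mostly aligned with w, plus noise.
x = w / np.linalg.norm(w) + 0.1 * rng.normal(size=d)

# FGSM step with L-infinity budget epsilon: move every coordinate
# epsilon in the direction that lowers the correct-class score
# (the gradient of the score w.r.t. x is just w for a linear model).
epsilon = 0.15
x_adv = x - epsilon * np.sign(w)

# The per-coordinate change is tiny, but the score shifts by
# epsilon * ||w||_1, enough to flip the prediction.
print(np.sign(w @ x), np.sign(w @ x_adv))
```

With this seed the clean input scores positive and the perturbed one negative, even though no coordinate moved by more than epsilon; a robust feature, by the definition above, would keep its sign under such a perturbation.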
Related commentary: Ilyas et al. propose that adversarial examples exist because ANNs exploit features that are predictive but not causal, and that ANNs may be far more sensitive to these features than humans are. Kim et al. further argue that neural mechanisms in the human visual pathway may ...
References:
Ilyas, A., Santurkar, S., Tsipras, D., Engstrom, L., Tran, B., & Madry, A. (2019). Adversarial examples are not bugs, they are features. In Advances in Neural Information Processing Systems (pp. 125–136).
Joseph, A. D., Nelson, B., Rubinstein, B. I., & Tygar, J. (2018). Adversarial Machine Learning. Cambridge University Press.
Kanamori, K., Takagi...