when presented with real examples under the first batch norm, while robustness to adversarial examples arises from the same parameter changes, applied under the second batch norm (d), that the network from (b) underwent (note the similarity...
One reason that the existence of adversarial examples can seem counter-intuitive is that most of us have poor intuitions for high dimensional spaces. We live in three dimensions, so we are not used to small effects in hundreds of dimensions adding up to create a large effect. There is anoth...
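The point above can be made concrete with a toy linear score. The sketch below (an illustration I am adding, not code from the original text) nudges every coordinate of an input by a tiny eps in the direction of sign(w_i); the score w . x then shifts by eps * sum(|w_i|), which grows with the dimension n even though each individual nudge is imperceptible.

```python
import random

# Toy illustration: in n dimensions, a per-coordinate perturbation of
# size eps, aligned with sign(w_i), shifts the linear score w . x by
# eps * sum(|w_i|) -- small in 3 dimensions, large in 30,000.
random.seed(0)
eps = 0.01  # tiny per-coordinate change

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

shifts = []
for n in (3, 300, 30000):
    w = [random.uniform(-1, 1) for _ in range(n)]
    x = [random.uniform(-1, 1) for _ in range(n)]
    x_adv = [xi + eps * (1 if wi >= 0 else -1) for wi, xi in zip(w, x)]
    shifts.append(score(w, x_adv) - score(w, x))
    print(f"n={n:6d}  score shift = {shifts[-1]:.3f}")
```

The shift grows roughly linearly with n, which is the high-dimensional intuition in the paragraph above: hundreds of small effects add up to a large one.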
Abstract: Recently, generative adversarial networks (GANs) have become one of the most popular topics in the field of artificial intelligence. Their outstanding capability to generate realistic samples has not only revived research on generative models, but also inspired research on semi-supervised learning and ...
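For context, the standard GAN formulation (Goodfellow et al., 2014) pits a generator $G$ against a discriminator $D$ in a minimax game:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

$D$ is trained to tell real samples from generated ones, while $G$ is trained to fool $D$; at the equilibrium of this game the generator's distribution matches $p_{\text{data}}$, which is what makes the generated samples realistic.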
Many adversarial examples are generated by computing model gradients. Since deep neural networks take raw input data without handcrafted features and are trained end to end, no feature-selection step is needed, unlike adversarial examples against classical machine learning models. – Black-...
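A minimal sketch of gradient-based generation, in the spirit of the fast gradient sign method: for a logistic-regression model $p = \sigma(w \cdot x + b)$, the gradient of the cross-entropy loss with respect to the *input* is $(p - y)\,w$, and the attack steps $x$ by $\epsilon \cdot \mathrm{sign}$ of that gradient. All weights and inputs below are invented toy values, not taken from the original text.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w = [0.9, -1.2, 0.4, 2.0]   # toy "trained" weights (assumption)
b = -0.1
x = [1.0, -0.5, 0.3, 0.8]   # a clean input the model assigns to class 1
y = 1.0

p_clean = predict(w, b, x)

# Input gradient of the binary cross-entropy loss: dL/dx_i = (p - y) * w_i
grad = [(p_clean - y) * wi for wi in w]

# Step each coordinate by eps in the sign of the gradient (FGSM-style)
eps = 0.5
x_adv = [xi + eps * (1 if g > 0 else -1) for xi, g in zip(x, grad)]
p_adv = predict(w, b, x_adv)

print(f"clean confidence: {p_clean:.3f}, adversarial: {p_adv:.3f}")
```

Even on this toy model, the model's confidence in the true label drops sharply after the single gradient step.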
However, we did not find nearly as powerful a regularization effect from this process, perhaps because these kinds of adversarial examples are not as difficult to solve. One natural question is whether it is better to perturb the input, the hidden layers, or both. Here the results are ...
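The input-versus-hidden-layer question can be sketched on a toy one-hidden-layer network, h = relu(W1 x), p = sigmoid(w2 . h), with gradients derived by hand. The weights and inputs below are assumptions for illustration only; the point is just that the same sign-of-gradient step can be applied either to x or directly to h.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

W1 = [[1.0, -0.5], [0.5, 1.5]]  # toy first-layer weights (assumption)
w2 = [1.2, 0.8]                 # toy output weights (assumption)
x = [1.0, 0.5]
y = 1.0
eps = 0.5

def forward(x):
    z1 = [sum(wij * xj for wij, xj in zip(row, x)) for row in W1]
    h = [max(0.0, z) for z in z1]
    p = sigmoid(sum(w * hi for w, hi in zip(w2, h)))
    return z1, h, p

z1, h, p_clean = forward(x)

# Hidden-layer attack: dL/dh_i = (p - y) * w2_i, step h by eps * sign
h_adv = [hi + eps * (1 if (p_clean - y) * w > 0 else -1)
         for hi, w in zip(h, w2)]
p_hidden = sigmoid(sum(w * hi for w, hi in zip(w2, h_adv)))

# Input attack: backprop through the relu,
# dL/dx_j = sum_i (p - y) * w2_i * relu'(z1_i) * W1[i][j]
dx = [sum((p_clean - y) * w2[i] * (1.0 if z1[i] > 0 else 0.0) * W1[i][j]
          for i in range(2)) for j in range(2)]
x_adv = [xj + eps * (1 if g > 0 else -1) for xj, g in zip(x, dx)]
p_input = forward(x_adv)[2]

print(f"clean: {p_clean:.3f}  hidden attack: {p_hidden:.3f}  "
      f"input attack: {p_input:.3f}")
```

Both variants lower the model's confidence in the true label; adversarial training could in principle use either kind of perturbed example.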
MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples
Robust Feature-Level Adversaries are Interpretability Tools
Random Normalization Aggregation for Adversarial Defense
Evolution of Neural Tangent Kernels under Benign and Adversarial Training ...
It can be hard to stay up to date on published papers in the field of adversarial examples, which has seen massive growth in the number of papers written each year. I have been somewhat religiously keeping track of these papers for the last few years, and realized it may be ...