Adversarial examples, which mislead deep neural networks by adding well-crafted perturbations, have become a major threat to classification models. Gradient-based white-box attack algorithms have been widely used to generate adversarial examples. However, most of them are designed for multi-class models,...
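For concreteness, below is a minimal sketch of one classic gradient-based white-box attack, the fast gradient sign method (FGSM); the model, loss function, and perturbation budget `epsilon` are illustrative assumptions, not the specific algorithms referenced in the abstract above.

```python
# A minimal FGSM sketch (one classic gradient-based white-box attack).
# model, loss_fn, x, y, and epsilon are assumed inputs for illustration.
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    """Perturb input x one step in the gradient-sign direction to increase the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Take a step of size epsilon along the sign of the input gradient.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed input in the valid pixel range [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

A call such as `fgsm_attack(model, torch.nn.functional.cross_entropy, x, y)` would then return perturbed inputs that typically raise the model's loss and can flip its predictions.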
In this work, we pose the learning task in extreme classification with a large number of tail-labels as learning in the presence of adversarial perturbations. This view motivates a robust optimization framework and an equivalence to a corresponding regularized objective. Under the proposed robustness ...
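To illustrate the kind of robustness-regularization equivalence meant here, the following is a standard worked example for linear models (not necessarily the paper's exact statement): under the hinge loss, worst-case $\ell_\infty$-bounded input perturbations reduce to an $\ell_1$ penalty on the weights, because $\ell_1$ is the dual norm of $\ell_\infty$.

```latex
% Worst-case perturbation of a linear margin: the inner maximum is
% attained at \delta = -\epsilon\, y\, \mathrm{sign}(w), giving the
% dual-norm term \epsilon \|w\|_1.
\[
\max_{\|\delta\|_\infty \le \epsilon}
  \max\bigl(0,\; 1 - y\, w^\top (x + \delta)\bigr)
= \max\bigl(0,\; 1 - y\, w^\top x + \epsilon \|w\|_1\bigr).
\]
% Hence robust training over all examples equals a regularized objective:
\[
\min_w \sum_{i=1}^{n} \max_{\|\delta_i\|_\infty \le \epsilon}
  \max\bigl(0,\; 1 - y_i\, w^\top (x_i + \delta_i)\bigr)
= \min_w \sum_{i=1}^{n}
  \max\bigl(0,\; 1 - y_i\, w^\top x_i + \epsilon \|w\|_1\bigr).
\]
```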