Adversarial Boot Camp: label free certified robustness in one epoch
Ryan Campbell, Chris Finlay, Adam M. Oberman
In this paper, we develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples. We leverage sequential composition in differential privacy to establish a new connection between diffe...
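The abstract above leans on sequential composition in differential privacy: running several DP mechanisms on the same data consumes the sum of their privacy budgets. The sketch below is a minimal, generic illustration of that accounting together with the standard Laplace mechanism; it is not the paper's mechanism, and the function names are our own.

```python
import math
import random

def laplace_mechanism(value, sensitivity, epsilon, rng=random):
    """Release `value` with Laplace noise of scale sensitivity/epsilon.

    For a query with the given L1 sensitivity, this output is
    epsilon-differentially private (standard Laplace mechanism).
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via the inverse CDF of Uniform(-0.5, 0.5).
    u = rng.uniform(-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return value + noise

def sequential_composition(epsilons):
    """Basic sequential composition: releasing the outputs of mechanisms
    with budgets eps_1, ..., eps_k on the same dataset is
    (eps_1 + ... + eps_k)-DP in total."""
    return sum(epsilons)
```

For example, answering the same counting query twice at epsilon = 0.5 each spends a total budget of 1.0; tighter (advanced) composition bounds exist, but the additive bound above is the one the abstract's phrase refers to.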
Despite the vulnerability of object detectors to adversarial attacks, very few defenses are known to date. While adversarial training can improve the empirical robustness of image classifiers, extending it directly to object detection is very expensive. This work is motivated by recent progress on certif...
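The certified-robustness progress this abstract alludes to is commonly instantiated by randomized smoothing: classify many Gaussian-noised copies of the input and take a majority vote, whose margin is what certification bounds are derived from. The following is a generic sketch of the prediction step only, with a toy base classifier of our own invention; it is not the paper's method.

```python
import random
from collections import Counter

def smoothed_predict(classifier, x, sigma=0.25, n_samples=100, seed=0):
    """Randomized-smoothing prediction: run the base classifier on
    n_samples Gaussian-perturbed copies of x (noise std sigma) and
    return the majority-vote class. Certification procedures bound the
    robust radius from the vote margin; that step is omitted here."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        votes[classifier(noisy)] += 1
    return votes.most_common(1)[0][0]

# Toy base classifier (illustrative only): sign of the first coordinate.
toy_clf = lambda x: int(x[0] > 0)
```

With sigma = 0.25, an input whose first coordinate is 1.0 is classified 1 by essentially every noisy copy, so the smoothed vote is stable; inputs near the decision boundary produce split votes and a small (or zero) certified radius.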