- Adversarial Attacks on Deep Graph Matching, 📝NeurIPS
- Attacking Graph-Based Classification without Changing Existing Connections, 📝ACSAC
- Cross Entropy Attack on Deep Graph Infomax, 📝IEEE ISCAS
- Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation, 📝ICLR, Code
- Towards More...
- A PyTorch adversarial library for attack and defense methods on images and graphs. Topics: machine-learning, deep-neural-networks, deep-learning, defense, graph-mining, graph-convolutional-networks, adversarial-examples, adversarial-attacks, graph-neural-networks. Updated Jul 23, 2024. Python.
- MadryLab / photoguard ...
Outcomes of adversarial attacks on deep learning models for ophthalmology imaging domains. JAMA Ophthalmol. 2020;138(11):1213–5.
Mahapatra D, Antony B, Sedai S, Garnavi R. Deformable medical image registration using generative adversarial networks. In: 2018 ...
Adversarial attacks on an uncertainty metric sample informative geometries that expand the training domain of neural network (NN) potentials. Combined with an active learning loop, this approach bootstraps and improves NN potentials while reducing the number of calls to the ground-truth method. This...
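The uncertainty-driven sampling described above can be sketched as follows. Everything here is an illustrative assumption rather than the paper's implementation: a toy ensemble of quadratic surrogate "energy models" stands in for trained NN potentials, ensemble variance serves as the uncertainty metric, and a central-difference gradient ascent pushes a geometry toward regions where the ensemble disagrees most.

```python
import numpy as np

# Hypothetical toy "ensemble": three surrogate energy models that agree
# near the training data (x ~ 0) and increasingly disagree far from it.
rng = np.random.default_rng(0)
ensemble = [lambda x, a=a: float(a * np.sum(x**2)) for a in (0.9, 1.0, 1.2)]

def uncertainty(x):
    """Ensemble disagreement: variance of the predicted energies."""
    preds = np.array([f(x) for f in ensemble])
    return preds.var()

def adversarial_sample(x0, steps=50, lr=0.1, eps=1e-4):
    """Gradient *ascent* on the uncertainty metric: perturb the input
    geometry toward regions where the ensemble disagrees most."""
    x = x0.copy()
    for _ in range(steps):
        # central-difference gradient of the uncertainty w.r.t. coordinates
        g = np.zeros_like(x)
        for i in range(x.size):
            d = np.zeros_like(x)
            d[i] = eps
            g[i] = (uncertainty(x + d) - uncertainty(x - d)) / (2 * eps)
        x += lr * g
    return x

x0 = rng.normal(scale=0.1, size=3)   # geometry near the "training domain"
x_new = adversarial_sample(x0)       # informative geometry to label next
```

In an active learning loop, `x_new` would be sent to the ground-truth method (e.g., an ab initio calculation), labeled, and added to the training set before retraining the ensemble.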
Adversarial Attacks. Adversarial examples demonstrate that deep learning models are highly vulnerable to small image perturbations. Adversarial perturbations fall into two categories: (1) global adversarial perturbations, which modify every pixel of the input, and (2) local adversarial perturbations, which are confined to a small region such as a patch; both pose threats to deep learning models....
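The global/local distinction can be illustrated with a minimal NumPy sketch; the linear score function, the budget `eps`, and the patch location are assumptions for illustration only. A global perturbation takes a small step at every pixel within an L-infinity budget (FGSM-style), while a local perturbation applies the same step only inside a mask.

```python
import numpy as np

# Toy linear "classifier" score s(x) = <w, x>; its input gradient is w.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
x = rng.uniform(size=(8, 8))          # clean "image"
eps = 0.03                            # L-infinity perturbation budget

# (1) Global perturbation: every pixel moves by eps in the direction of
#     the gradient's sign, so the change is small but everywhere.
x_global = x + eps * np.sign(w)

# (2) Local perturbation: the same step, restricted to a small patch;
#     pixels outside the patch are left untouched.
mask = np.zeros_like(x)
mask[2:5, 2:5] = 1.0                  # 3x3 adversarial patch
x_local = x + eps * np.sign(w) * mask
```

The global variant is bounded everywhere (`|x_global - x| <= eps` per pixel) but touches the whole image; the local variant can use a much larger per-pixel budget in practice because its footprint is confined.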
“Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems”. This study performed a literature review based on a thorough search of open-access research papers via online sources (PubMed and Google). The research provides examples of unique attack ...
- (99%) Chetan Verma; Archit Agarwal
- Impact of Adversarial Attacks on Deep Learning Model Explainability. (99%) Gazi Nazia Nur; Mohammad Ahnaf Sadat
- UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models. (99%) Yuning Han; Bingyin Zhao; Rui Chu; Feng Luo; Biplab Sikdar; Yingjie...
- Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey (Zhang et al., 2020), 2020
- Towards a Robust Deep Neural Network in Texts: A Survey (Wang et al., 2019), 2021
- Measure and Improve Robustness in NLP Models: A Survey (Wang et al., 2021), 2022
- Adversarial a...
- VoiceBlock: Privacy through Real-Time Adversarial Attacks with Audio-to-Audio Models
- A Closer Look at the Adversarial Robustness of Deep Equilibrium Models
- Practical Adversarial Attacks on Spatiotemporal Traffic Forecasting Models
- Your Out-of-Distribution Detection Method is Not Robust!
- Decision-based Bl...
6. Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning
Conference: AAAI 2020. AAAI Technical Track: Machine Learning.
Authors: Jianwen Sun, Tianwei Zhang, Xiaofei Xie, Lei Ma, Yan Zheng, Kangjie Chen, Yang Liu
Link: https://aaai.org/ojs/index.php/AAAI/article/view/6047/5903 ...