Zero-Shot Adversarial Robustness Large vision models (LVMs) generalize well to new tasks and datasets, and CLIP's zero-shot ability is especially strong. However, the transfer of adversarial robustness to zero-shot settings has received no attention so far; this work focuses precisely on a model's adversarial robustness on zero-shot tasks. Adversarial Training Adversarial training is a widely used technique for strengthening a model's adversarial robustness; its principle...
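The principle behind adversarial training is a min-max game: an inner loop crafts a worst-case perturbation inside an ε-ball around each input, and the outer loop trains on those perturbed examples. As a minimal sketch of the inner maximization, here is a NumPy PGD attack on a hypothetical linear classifier; the model (`w`, `b`), step size, and budget are illustrative assumptions, not taken from any paper cited above:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Inner maximization of adversarial training: find a perturbation
    delta with ||delta||_inf <= eps that maximizes the binary
    cross-entropy loss of a linear classifier sigmoid(w.x + b)
    on label y in {0, 1}.
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        logit = np.dot(w, x + delta) + b
        p = 1.0 / (1.0 + np.exp(-logit))
        grad = (p - y) * w                     # dBCE/d(input) for this model
        delta = delta + alpha * np.sign(grad)  # gradient *ascent* on the loss
        delta = np.clip(delta, -eps, eps)      # project back into L_inf ball
    return x + delta
```

In full adversarial training, the outer loop would then minimize the loss on `pgd_attack(x, y, ...)` instead of on the clean `x`.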
...it does not reflect a true zero-shot scenario. We therefore build an explicit benchmark of held-out new tasks to evaluate broad zero-shot transfer ability, rather than...
(NeurIPS 2024) Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models — topics: adversarial-robustness, zero-shot-adversarial-robustness (Python; updated Nov 14, 2024)
and original models using clean examples. Its objective is to maintain model performance on clean samples while enhancing overall robustness. The experiments validate that our method yields a 9.58% enhancement in zero-shot robust accuracy over the current state-of-the-art techniques across 16 ...
Keywords: Zero-shot learning · Robust generalization · Adversarial robustness · Data shift. Robustness is an active research topic; however, it has been investigated primarily from a fully supervised perspective, and the robustness of zero-shot learning (ZSL) models has been largely neglected. In this paper, we present a ...
Through our novel optical illusion dataset, MonoTrap, and extensive evaluation across multiple benchmarks, we demonstrate that our synthetic-only trained model achieves state-of-the-art results in zero-shot generalization, significantly outperforming existing solutions while showing remarkable robustness to...
In the task of semantic segmentation for autonomous driving, it is important to study the zero-shot adversarial robustness of SAM. We therefore conduct a systematic empirical study of SAM's robustness without additional training. Based on the experimental results, the zero-shot adversarial ...
We address this tension by introducing a simple and effective method for improving robustness while fine-tuning: ensembling the weights of the zero-shot and fine-tuned models (WiSE-FT). Compared to standard fine-tuning, WiSE-FT provides large accuracy improvements under distribution shift, while ...
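The weight ensembling in WiSE-FT reduces to a per-parameter linear interpolation between the zero-shot and fine-tuned checkpoints. A minimal sketch, assuming both models share one architecture and their weights are stored as plain name-to-array dicts (the dict format and `alpha` default here are illustrative):

```python
import numpy as np

def wise_ft(zero_shot_weights, finetuned_weights, alpha=0.5):
    """WiSE-FT-style weight ensembling: interpolate each parameter
    tensor between the two checkpoints. alpha=0 recovers the
    zero-shot model, alpha=1 the fine-tuned one."""
    return {name: (1 - alpha) * zero_shot_weights[name]
                  + alpha * finetuned_weights[name]
            for name in zero_shot_weights}
```

Because only the weights are averaged (not the predictions), the ensembled model has the same inference cost as either endpoint; `alpha` trades fine-tuned in-distribution accuracy against zero-shot robustness under shift.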
[75], fine-tuning comes at the cost of robustness: across several natural distribution shifts, the accuracy of their fine-tuned models is lower than that of the original zero-shot model. This leads to a natural question: Can zero-shot models be fine-tuned without reducing accuracy under ...
1a). To improve the noise robustness of ZS-DeconvNet while maintaining its unsupervised characteristic, we adopted an image recorrupting scheme [26] that generates two noise-independent recorrupted images from the original image, which are then used as inputs and GTs in the network training (Methods...
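For additive Gaussian noise, a recorrupting scheme of this kind can be sketched as a Recorrupted-to-Recorrupted-style construction: one noisy observation is split into two images whose noise components are uncorrelated, so one can serve as the network input and the other as its training target. The exact scheme used by ZS-DeconvNet is specified in its Methods; the version below is a generic sketch under the Gaussian-noise assumption, with `sigma` and `alpha` as illustrative parameters:

```python
import numpy as np

def recorrupt(noisy, sigma, alpha=1.0, rng=None):
    """Generate two recorrupted images from one noisy observation
    y = x + n, n ~ N(0, sigma^2). The auxiliary noise z is added to
    one copy and subtracted (scaled) from the other, so the two
    images' noise components are uncorrelated:
        Cov(n + alpha*z, n - z/alpha) = Var(n) - Var(z) = 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(noisy.shape) * sigma
    input_img = noisy + alpha * z      # network input
    target_img = noisy - z / alpha     # training target ("GT")
    return input_img, target_img
```

Training a denoiser to map `input_img` to `target_img` is then statistically equivalent to supervised training on clean targets, which is what lets the network stay unsupervised.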