Matching image-image features (MF-ii): Although aligned image and text encoders have been shown to perform well on vision-language tasks, recent work suggests that VLMs can behave like bags of words and are therefore potentially unreliable when optimizing cross-modal similarity. Given this, an alternative approach is to use a public text-to-image generation model h_\xi (e.g., Stable Diffusion) and generate a target image h_\xi(c_{tar}) corresponding to c_{tar}...
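As a rough illustration of MF-ii, the sketch below optimizes an \ell_\infty-bounded perturbation so that the adversarial image's features match those of the generated target image. It assumes a CLIP-style `image_encoder` and a target image `x_tar` = h_\xi(c_{tar}) produced offline by Stable Diffusion; the PGD-style loop, step sizes, and function names are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def mf_ii_attack(image_encoder, x_clean, x_tar, eps=8/255, alpha=1/255, steps=100):
    """Illustrative MF-ii: craft x_adv whose *image* features match those of a
    target image x_tar = h_xi(c_tar) generated from the target caption."""
    with torch.no_grad():
        f_tar = F.normalize(image_encoder(x_tar), dim=-1)  # fixed target embedding

    delta = torch.zeros_like(x_clean, requires_grad=True)
    for _ in range(steps):
        f_adv = F.normalize(image_encoder(x_clean + delta), dim=-1)
        loss = -(f_adv * f_tar).sum(dim=-1).mean()  # negative cosine similarity
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()                    # PGD descent step
            delta.clamp_(-eps, eps)                               # l_inf constraint
            delta.copy_((x_clean + delta).clamp(0, 1) - x_clean)  # valid pixel range
        delta.grad.zero_()
    return (x_clean + delta).detach()
```

Because the target embedding comes from an image rather than a caption, the attack sidesteps the unreliable cross-modal (image-text) similarity entirely.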
On Evaluating Adversarial Robustness of Large Vision-Language Models [Project Page] | [Slides] | [arXiv] | [Data Repository] TL;DR: In this research, we evaluate the adversarial robustness of recent large vision-language (generative) models (VLMs) under the most realistic and challenging setting...
| Title | Venue | Date | Code | Demo |
| --- | --- | --- | --- | --- |
| On Evaluating Adversarial Robustness of Large Vision-Language Models | arXiv | 2023-05-26 | [Github] | - |
| Grounding Language Models to Images for Multimodal Inputs and Outputs | ICML | 2023-01-31 | [Github] | [Demo] |

Awesome Datasets

Datasets of Pre-Training for Alignment

| Name | Paper | Type | Modalities |
| --- | --- | --- | --- |
| COYO-700M | COYO-700M: ... | | |
In addition, we establish AREP-RSIs, a one-stop platform for conveniently evaluating adversarial robustness and performing defenses on recognition models, which should benefit future research in the remote sensing field. — Lu, Zihao; Sun, Hao; ...
Evaluation on adversarial examples has become a standard procedure for measuring the robustness of deep learning models. Because creating white-box adversarial examples for discrete text input is difficult, most analyses of the robustness of NLP models have been carried out with black-box adversarial examples...
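A toy sketch of such a black-box text attack is below: it greedily swaps words for synonyms and keeps a swap only if it lowers the model's confidence in the true label. The `predict` function (query-only access returning label probabilities) and the tiny synonym table are assumptions standing in for a real model API and a lexical resource such as WordNet.

```python
# Toy greedy black-box attack on a text classifier: swap words with synonyms
# and keep a swap only if it lowers the true-class probability.
# `predict(text) -> dict[label, prob]` is assumed query-only (black-box).

SYNONYMS = {"good": ["fine", "decent"], "movie": ["film", "picture"]}  # stand-in lexicon

def greedy_substitution_attack(predict, text, true_label):
    words = text.split()
    best_prob = predict(text)[true_label]
    for i, w in enumerate(words):
        for cand in SYNONYMS.get(w.lower(), []):
            trial = words.copy()
            trial[i] = cand
            prob = predict(" ".join(trial))[true_label]
            if prob < best_prob:  # keep the swap that hurts the model most
                words, best_prob = trial, prob
    return " ".join(words), best_prob
```

No gradients are used anywhere: the attacker only observes output probabilities, which is what makes the attack black-box.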
6) CARLINI AND WAGNER ATTACKS (C&W) — [1608.04644] Towards Evaluating the Robustness of Neural Networks (arxiv.org). Carlini and Wagner proposed three adversarial attack methods that constrain perturbations under the \ell_\infty, \ell_2, and \ell_0 norms so that they are nearly imperceptible. Experiments showed the attacks defeat defensive distillation, and the adversarial perturbations generated by the algorithm can transfer from unsecured networks to secured (distilled) ones...
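A minimal PyTorch sketch of the C&W \ell_2 formulation follows, assuming a classifier `model` that returns logits: it minimizes ||x_adv - x||_2^2 + c * f(x_adv) with the tanh change of variables keeping pixels in [0, 1]. The hyperparameters and the single fixed constant `c` are simplifications (the paper binary-searches over c).

```python
import torch

def cw_l2_attack(model, x, target, c=1.0, steps=200, lr=0.01, kappa=0.0):
    """Sketch of the targeted C&W l2 attack: minimize ||delta||_2^2 + c * f(x+delta),
    with x_adv = 0.5 * (tanh(w) + 1) keeping pixels in [0, 1]."""
    w = torch.atanh((2 * x - 1).clamp(-0.999, 0.999)).detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)
        target_logit = logits.gather(1, target.view(-1, 1)).squeeze(1)
        # best logit among all non-target classes
        other_logit = logits.scatter(1, target.view(-1, 1), float("-inf")).max(1).values
        f = torch.clamp(other_logit - target_logit, min=-kappa)  # the paper's f_6 loss
        loss = ((x_adv - x) ** 2).flatten(1).sum(1) + c * f
        opt.zero_grad()
        loss.sum().backward()
        opt.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()
```

The margin term f only pushes until the target class wins by kappa, so the optimizer can spend the rest of its budget shrinking the perturbation norm.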
Athalye, A., Engstrom, L., Ilyas, A., Kwok, K.: Synthesizing robust adversarial examples. In: International Conference on Machine Learning, pp. 284–293. PMLR (2018)
Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP). IEEE (2017)
AgentBench: evaluating LLMs as agents. arXiv preprint arXiv:2308.03688 (2023)
Kang, S., Yoon, J., Yoo, S.: Large language models are few-shot testers: exploring LLM-based general bug reproduction. In: Proceedings of the 45th IEEE/ACM International Conference on Software Engineering, pp. 2312–... (2023)
[arXiv:2408.13898] Evaluating Attribute Comprehension in Large Vision-Language Models — Haiwen Zhang, Zixi Yang, Yuanzhi Liu, Xinran Wang, Zheqi He, Kongming Liang, Zhanyu Ma [Paper] [Code]
[ECCV 2024] Unlocking Attributes' Contribution to Successful Camouflage: A Combined Textual and Visual Analysis ...
These studies mainly focused on building open-source libraries for adversarial attacks and defenses and did not provide a comprehensive strategy for evaluating the security of DL models. DEEPSEC provides a unified platform for adversarial robustness analysis of DL models, containing 16 attack methods ...
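In the spirit of such unified platforms, a minimal sketch of a robustness-evaluation loop is shown below: it reports clean accuracy alongside per-attack robust accuracy. The `attacks` dict of callables and the data loader are placeholder assumptions, not DEEPSEC's actual API.

```python
import torch

def evaluate_robustness(model, attacks, loader, device="cpu"):
    """Clean accuracy plus per-attack robust accuracy. `attacks` maps a name
    to a callable attack(model, x, y) -> x_adv (placeholder interface)."""
    model.eval()
    results = {}
    for name, attack in {"clean": None, **attacks}.items():
        correct = total = 0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            # attacks may need gradients, so only inference runs under no_grad
            x_eval = x if attack is None else attack(model, x, y)
            with torch.no_grad():
                pred = model(x_eval).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
        results[name] = correct / total
    return results
```

For example, `evaluate_robustness(model, {"cw_l2": lambda m, x, y: cw_l2_attack(m, x, y)}, loader)` would report clean and C&W-robust accuracy side by side, which is the kind of comparable, multi-attack summary a unified platform is meant to produce.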