[6] SM Tonmoy, SM Zaman, Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, and Amitava Das. A comprehensive survey of hallucination mitigation techniques in large language models. arXiv preprint arXiv:2401.01313, 2024. [7] Xuhui Jiang, Yuxing Tian, Fengrui Hua, Chengjin Xu, Yuanzhuo Wang,...
and providing erroneous legal advice. Even small errors can cause harm, revealing that LLMs lack genuine comprehension despite advances in self-learning. This paper presents a comprehensive survey of over thirty-two techniques developed to mitigate hallucination in LLMs. Notable among these are Ret...
Hallucination of Multimodal Large Language Models: A Survey. This survey presents a comprehensive analysis of the phenomenon of hallucination in multimodal large language models (MLLMs), also known as Large Vision-La... Z Bai, P Wang, T Xiao, ... Cited by: 0. Published: 2024. Multilingual Hallucination ...
A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models: A comprehensive survey of over thirty-two techniques developed to mitigate hallucination in LLMs [2 Jan 2024]. Anthropic Many-shot jailbreaking: a simple long-context attack, bypassing safety guardrails by bombarding ...
A 2022 report called "Survey of Hallucination in Natural Language Generation" describes how deep learning-based systems are prone to "hallucinate unintended text," affecting performance in real-world scenarios. The paper's authors mention that the term hallucination was first used in 2000 in a paper call...
Finally, Section 9 concludes the survey, offering insights into future directions in this field.

2 Taxonomy

As research on the planning ability of LLM-based agents presents a flourishing scene, various methods have been proposed to exploit the upper limit of planning ability. To have a better...
A phenomenological survey of auditory verbal hallucinations in the hypnagogic and hypnopompic states. Keywords: Auditory verbal hallucination; Dialogic; Hypnagogic; Hypnopompic; Phenomenology. The phenomenology of auditory verbal hallucinations (AVHs) occurring in hypnagogic and hypnopompic (H&H) states has received little attention....
Survey
[2024/02] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey
[2024/02] A Survey of Text Watermarking in the Era of Large Language Models
[2024/02] Safety of Multimodal Large Language Models on Images and Text
[2024/02] A Survey on Hallucination in Large Vis...
Problematic alignment: a flawed alignment process can mislead LLMs into hallucination. Generation strategy: the generation strategy employed by LLMs carries inherent risks; autoregressive generation produces a snowball effect in which early hallucinated tokens compound into later ones. Mitigating hallucination. Data side: the curation of pre-training corpora, i.e., selecting and filtering the data ...
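The data-side mitigation above (curating and filtering the pre-training corpus) can be sketched as a simple quality filter. This is a minimal illustration with hypothetical heuristics (minimum length, symbol-to-character ratio), not the method of any particular survey:

```python
# Minimal sketch of data-side hallucination mitigation: filter a
# pre-training corpus with simple, hypothetical quality heuristics.
def quality_filter(doc: str, min_len: int = 20, max_symbol_ratio: float = 0.3) -> bool:
    """Keep documents that are long enough and not dominated by symbols."""
    if len(doc) < min_len:
        return False
    # Count characters that are neither alphanumeric nor whitespace.
    symbols = sum(1 for ch in doc if not (ch.isalnum() or ch.isspace()))
    return symbols / len(doc) <= max_symbol_ratio

corpus = [
    "Large language models can hallucinate unsupported facts.",
    "@@@###$$$",  # noisy document: too short and symbol-heavy, dropped
    "short",      # below the minimum length, dropped
]
filtered = [d for d in corpus if quality_filter(d)]
```

Real curation pipelines layer many such heuristics (language identification, deduplication, perplexity-based scoring) on top of this basic shape.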
Transactions of the Association for Computational Linguistics, 2023, 11: 351–366. 186. Ji Z, Lee N, Frieske R, Yu T, Su D, Xu Y, Ishii E, Bang Y J, Madotto A, Fung P. Survey of hallucination in natural language generation. ACM Computing Surveys, 2023, 55(12): 1–38...