Daily Share #20 — LLM Hallucination (1): Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models

Part 1: Paper Abstract

Although large language models (LLMs) have demonstrated impressive capabilities across a wide range of tasks, they occasionally "hallucinate", i.e., generate content that conflicts with the user input, with previously generated context, or with established world knowledge. The paper summarizes current work on detecting, explaining...

Sources of hallucination:
- Training data: LLMs lack relevant knowledge or internalize false knowledge.
- Flawed alignment: a problematic alignment process can mislead LLMs into hallucination.
- Generation strategy: the generation strategy employed by LLMs carries inherent risks; autoregressive generation produces a snowball effect, where an early hallucinated token conditions all subsequent output.

Mitigating hallucination:
- Data side: the curation of pre-training ...
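The snowball effect of autoregressive generation can be illustrated with a minimal sketch: each generated token is appended to the context, so a single early error corrupts everything conditioned on it. The bigram lookup table below stands in for a language model and is entirely hypothetical.

```python
# Hypothetical "language model": a bigram lookup table mapping the last
# two tokens of the context to the next token. Not a real LLM.
FACTS = {
    ("capital", "of"): "Paris",
    ("of", "Paris"): "is",
    ("Paris", "is"): "in-France",
    ("of", "Rome"): "is",
    ("Rome", "is"): "in-Italy",
}

def generate(context, steps, corrupt_first=False):
    """Greedy autoregressive loop: each step conditions on its own output.

    With corrupt_first=True, a single hallucinated token is injected at
    step 0; every later step then faithfully extends the wrong context.
    """
    tokens = list(context)
    for step in range(steps):
        next_tok = FACTS.get(tuple(tokens[-2:]), "<unk>")
        if corrupt_first and step == 0:
            next_tok = "Rome"  # one early hallucination...
        tokens.append(next_tok)
    return tokens
```

A clean run yields `capital of Paris is in-France`; with the single injected error, the model confidently continues `capital of Rome is in-Italy` — locally fluent, globally wrong, which is exactly the snowballing behavior described above.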
We survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and ...
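One family of detection approaches covered by such surveys samples the model several times and measures agreement: low self-consistency across samples is a signal of possible hallucination. A minimal sketch of that idea — the threshold value and function names here are illustrative, not taken from the paper:

```python
from collections import Counter

def consistency_score(samples):
    """Fraction of sampled answers that agree with the majority answer."""
    counts = Counter(samples)
    majority_count = counts.most_common(1)[0][1]
    return majority_count / len(samples)

def flag_hallucination(samples, threshold=0.6):
    """Flag an answer as a possible hallucination when agreement is low.

    The 0.6 cutoff is an arbitrary illustrative choice; real systems tune
    this on held-out data.
    """
    return consistency_score(samples) < threshold
```

For instance, five samples reading `Paris, Paris, Paris, Rome, Paris` score 0.8 and pass, while `Paris, Rome, Berlin, Paris, Rome` scores 0.4 and is flagged. Real hallucinations can also be *consistently* wrong, so consistency checks complement rather than replace fact-grounded verification.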
[Zhang et al., 2023b] Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models. arXiv preprint arXiv:2309.01219, 2023.