| Vision Encoder | Download Link |
| --- | --- |
| EVA02_CLIP_L_336_psz14_s6B | QuanSun/EVA-CLIP |
| siglip-so400m-patch14-384 | google/siglip-so400m-patch14-384 |

For LLMs, we support phi-1.5, stablelm-2, qwen1.5-1.8b, minicpm, phi-2, phi-3 and llama3-8b.

| MODEL_TYPE | LLM | Download Link |
| --- | --- | --- |
| phi-1.5 | phi-1_5 | microsoft/phi-1_5 |
| stablelm-2 | sta... | |
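The entries in the Download Link columns are Hugging Face Hub repo ids, so any listed checkpoint can be pulled with the standard `transformers` loaders. A minimal sketch for the phi-1.5 row (the `trust_remote_code` flag is an assumption for checkpoints that ship custom modeling code; drop it if the architecture is natively supported):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from the "Download Link" column above.
repo_id = "microsoft/phi-1_5"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# trust_remote_code only matters for checkpoints shipping custom modeling code.
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```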
EVA2.0: Investigating Open-domain Chinese Dialogue Systems with Large-scale Pre-training, by Yuxian Gu, Jiaxin Wen, Hao Sun, Yi Song, Pei Ke, Chujie Zheng, Zheng Zhang, Jianzhu Yao et al.

DiSTRICT: Dialogue State Tracking with Retriever Driven In-Context Tuning, by Praveen Venkateswaran, ...
First, dual-tower multi-modal models such as CLIP are typically pre-trained with large-scale self-supervised learning on rich image-text data. Because that pre-training is almost entirely contrastive, it provides no direct generative supervision, which can leave the models lacking expertise or fine-grained ...
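To make the distinction concrete, below is a minimal PyTorch sketch of the symmetric InfoNCE objective that dual-tower models like CLIP optimize (function and argument names are illustrative, not from any particular codebase). Each tower only has to rank matched image-text pairs above the mismatched ones within a batch; no caption token is ever reconstructed, which is exactly the missing generative supervision noted above.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) outputs of the two towers,
    where row i of each tensor comes from the same image-text pair.
    """
    # L2-normalize so the dot product below is cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity logits; the diagonal holds matched pairs.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Classify the matching text for each image, and vice versa.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```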
* Authors: Zhanghao Chen, Yifei Sun, Wenjian Qin, Ruiquan Ge, Cheng Pan, Wenming Deng, Zhou Liu, Wenwen Min, Ahmed Elazab, Xiang Wan, Changmiao Wang
* Comments: 5 pages, 2 figures
* Title: Self-supervised OCT Image Denoising with Slice-to-Slice Registration and Reconstruction
* PDF: arxiv.org/abs/2311.1516...
@article{zhao2023antgpt,
  title   = {AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?},
  author  = {Qi Zhao and Shijie Wang and Ce Zhang and Changcheng Fu and Minh Quan Do and Nakul Agarwal and Kwonjoon Lee and Chen Sun},
  journal = {ICLR},
  year    = {2024}
}
by Mingda Chen, Jingfei Du, Ramakanth Pasunuru, Todor Mihaylov, Srini Iyer, Veselin Stoyanov and Zornitsa Kozareva. This paper proposes inserting a self-supervised training stage (MLM, NSP, CL, etc.) between pre-training and downstream use to teach the LM to perform in-context learning (a sketch of the MLM objective follows this entry). Analysis reveals that...
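As an illustration of the first of those objectives, here is a self-contained sketch of standard BERT-style MLM corruption (the 15% selection rate and 80/10/10 replacement split are the conventional recipe, not values taken from this paper, whose exact objective mix may differ):

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_prob=0.15, seed=None):
    """BERT-style MLM corruption: select ~15% of positions; of those,
    80% become [MASK], 10% a random vocabulary token, 10% stay as-is.
    Labels hold the original token only at corrupted positions."""
    rng = random.Random(seed)
    corrupted, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok  # the model must predict the original token here
            r = rng.random()
            if r < 0.8:
                corrupted[i] = mask_token
            elif r < 0.9:
                corrupted[i] = rng.choice(vocab)
            # else: keep the token unchanged (it is still a prediction target)
    return corrupted, labels
```

For example, `mask_tokens("the cat sat on the mat".split(), vocab=["the", "cat", "sat", "on", "mat"])` returns the corrupted token list plus per-position labels that are `None` everywhere except the selected slots.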
by Boxi Cao, Hongyu Lin, Xianpei Han and Le Sun

A Survey of Large Language Models, by Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang et al.

Survey of Hallucination in Natural Language Generation, ...