「ICLR 2018」was held from April 30 to May 3 at the Vancouver Convention Centre in Vancouver, Canada. That year's paper decisions again fell into four categories, as follows: 313 poster papers, 23 oral papers (stage presentations), and 89 Workshop papers, for a total of 425 accepted papers; 486 papers were rejected (reject-paper), giving an acceptance rate of 34.36%.
1. RGT | Recursive Generalization Transformer for Image Super-Resolution (SJTU: Yulun Zhang (yes, you read that right — Yulun joins SJTU in spring 2024) and Linghe Kong's team; USYD: Jinjin Gu et al.) Paper: OpenReview, arXiv Code: github.com/zhengchen199 Abstract: The Transformer architecture has demonstrated remarkable performance in image super-resolution (SR). ...
Paper: Protein Discovery with Discrete Walk-Jump Sampling. Paper link: https://openreview.net/forum?id=zMPHKOmQNb. Institutions: Genentech, New York University. Authors: Nathan C. Frey, Dan Berenberg, Karina Zadorozhny, Joseph Kleinhenz, Julien Lafrance-Vanasse, Isidro Hotzel, Yan Wu, Stephen Ra, Richard Bonneau, Kyunghyun Cho, And...
ICLR 2024 • statistics • paper list
Total submissions: 7304 (min 1.00, max 9.00, avg 5.11, std 1.26)
Accepted: 2260 (30.94%) (min 3.60, max 9.00, avg 6.44, std 0.70)
  Poster: 1807 (24.74%) (min 3.60, max 8.00, avg 6.25, std 0.59)
  Spotlight: 367 (5.02%) (min 5.40, max 8.50, avg 7.10, std 0.57)
  Oral: 86 (1.18%) (min 6.00, ...)
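The percentages in the statistics above follow directly from the raw counts; a quick sanity check of the arithmetic (counts taken from the table, per-tier labels as shown there):

```python
# Verify the acceptance-tier percentages against the total submission count.
total = 7304          # total ICLR 2024 submissions
tiers = {
    "accepted": 2260,
    "poster": 1807,
    "spotlight": 367,
    "oral": 86,
}

for name, count in tiers.items():
    pct = 100 * count / total
    print(f"{name}: {count} ({pct:.2f}%)")

# Poster + spotlight + oral should sum to the accepted total.
assert tiers["poster"] + tiers["spotlight"] + tiers["oral"] == tiers["accepted"]
```

Running this reproduces the 30.94% / 24.74% / 5.02% / 1.18% figures exactly.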
In this paper, we formally formulate the neighborhood effect as an interference problem from the perspective of causal inference and introduce a treatment representation to capture the neighborhood effect. On this basis, we propose a novel ideal loss that can be used to deal with selection bias in the ...
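The paper's exact ideal loss is not shown in this excerpt. As background, the standard baseline it builds on — correcting selection bias with inverse-propensity scoring (IPS) — can be sketched as follows (a minimal illustration with made-up names, not the paper's actual loss):

```python
import numpy as np

def ips_loss(errors, observed, propensity):
    """Inverse-propensity-scored empirical risk.

    errors:     per (user, item) prediction errors, shape (n,)
    observed:   1 if the rating was observed, else 0, shape (n,)
    propensity: estimated probability of observation, shape (n,)
    """
    # Reweight each observed error by 1/propensity so the estimator is
    # unbiased for the full (user, item) matrix despite selection bias.
    return np.mean(observed * errors / propensity)

# Toy example: two observed entries out of four, all with propensity 0.5.
errors = np.array([0.2, 0.5, 0.1, 0.4])
observed = np.array([1, 0, 1, 0])
propensity = np.array([0.5, 0.5, 0.5, 0.5])
print(ips_loss(errors, observed, propensity))  # (0.2/0.5 + 0.1/0.5) / 4 = 0.15
```

The abstract's point is that IPS-style corrections treat each (user, item) pair independently, whereas the neighborhood effect introduces interference between pairs that this simple estimator ignores.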
©PaperWeekly original · Author | Shiping Gao. Affiliation | master's student, Sun Yat-sen University. Research focus | preference alignment for language models. Background: In the AI world, large language models (LLMs), backed by massive parameter counts and compute, can already generate answers highly aligned with human preferences, and they power star products such as ChatGPT. However, these heavyweight models demand so much compute and memory that they are hard to deploy on phones, edge devices, and other resource-constrained platforms.
Official code for the ICLR 2024 paper "Do Generated Data Always Help Contrastive Learning?", authored by Yifei Wang*, Jizhe Zhang*, and Yisen Wang. With the rise of generative models, especially diffusion models, the ability to generate realistic images close to the real data distribution has been well recog...
Code for the ICLR 2024 paper "Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment." [Paper] | [Project Page] | [Video Demo] ✨ Please do not hesitate to give a star! ✨ We introduce a novel task within the field of human motion generation, termed dance...
©PaperWeekly original · Author | Shitong Duan. Affiliation | master's student, Fudan University. Research focus | value alignment of large language models. Abstract: In recent years, large language models (LLMs) have achieved unprecedented breakthroughs. However, LLMs may generate unethical content in everyday use, creating social risks. While existing research has extensively studied specific issues such as bias and toxicity, work that examines LLMs from the perspective of moral philosophy ...
Over the past six months or so, the list has collected 100+ papers on LLM inference (papers with code), covering widely used techniques such as attention optimization, weight quantization, and KV Cache optimization, as well as newer directions such as Early Exit, Long Context/Prompt KV Cache optimization, and Parallel Decoding/Sampling. Awesome LLM Inference 0x03 Content snippets: everything is organized on GitHub. I won't repeat ...
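To illustrate one of the techniques the list covers, KV Cache reuse in autoregressive decoding can be sketched as below (a simplified single-head NumPy example with illustrative names, not taken from any particular paper in the list):

```python
import numpy as np

def attend(q, K, V):
    """Single-head scaled dot-product attention for one query vector."""
    scores = K @ q / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

rng = np.random.default_rng(0)
d = 8
K_cache, V_cache = [], []   # the KV cache grows by one entry per step

for step in range(4):
    x = rng.normal(size=d)  # current token's hidden state
    q, k, v = x, x, x       # stand-in for learned Q/K/V projections
    # Append only the new key/value; all past entries are reused, so each
    # decode step costs O(seq_len) instead of recomputing every K and V.
    K_cache.append(k)
    V_cache.append(v)
    out = attend(q, np.stack(K_cache), np.stack(V_cache))

print(out.shape)  # (8,)
```

The cache trades memory for compute, which is exactly why techniques like KV Cache quantization and long-context cache optimization appear as their own categories in the list.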