https://2024.naacl.org/program/accepted_papers_industry/ This article surveys papers on rumor/fake-news detection, fact-checking, and AI-generated-text detection, 9 papers in total, with links to preprint versions wherever they could be found. Large language models are, unsurprisingly, the protagonists: several earlier works had already attempted to apply LLMs to fake-news detection and fact verification...
https://2022.naacl.org/program/accepted_papers/ This article reviews the recommendation-system papers from ACL/NAACL'22: new tasks and new datasets (essential for leaderboard chasing); a two-birds-with-one-stone model unifying retrieval and ranking; multi-task learning; denoising with positive and negative feedback signals; and session-based recommendation with pretrained language models. Since the papers are few and the topics scattered, category labels are given per paper rather than in an overall summary. 1. Series introduction 20...
Some of the presentations at the conference will be of papers accepted by the Transactions of the ACL (TACL) and the Computational Linguistics (CL) journals. Submission Topics: NAACL 2024 aims to have a broad technical program. Relevant topics for the conference include, but are not limit...
Including Findings. Accepted Papers: AIT-QA: Question Answering Dataset over Complex Tables in the Airline Industry. Yannis Katsis, Saneem Ahmed Chemmengath, vishwajeet kumar, Samarth Bharadwaj, MUSTAFA CA…
roomylee/nlp-papers-with-arxiv (430 stars): Statistics and accepted-paper lists of NLP conferences with arXiv links. Topics: nlp, naacl, natural-language-processing, acl, arxiv, computational-linguistics, emnlp, emnlp2019, acl2020, emnlp2020. Updated Jul 24, 2021. Jupyter Notebook. xcfcode...
This work was accepted at NAACL 2022 (video). TLDR: This repo provides a tagger that labels the related-work sections of NLP papers in the following way. Discourse tagging: each sentence is labeled with one of single document summarization, multi document summarization, narrative with citation, tra...
For the complete list of papers, see Accepted Papers: https://2024.naacl.org/program/accepted_papers/
    (Volume 1: Long Papers)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.naacl-long.421",
    pages = "7595--7628",
}
@inproceedings{li-etal-2024-superfiltering,
    title = "Super...
December 2023 ARR: https://stats.aclrollingreview.org/iterations/2023/december/ I looked up past years' ACL/NAACL...
[2024/03] Our paper has been accepted to the NAACL 2024 main conference! [2024/02] We released Superfiltering, which reveals the strong consistency between small and large LLMs in perceiving and evaluating the difficulty of instruction-tuning data, and utilizes a small LM, e.g., GPT-2 (124...