The paper proposes Language Guidance for Prompt-based Continual Learning (LGCL), a plug-in module for prompt-based methods. LGCL is task-agnostic and provides the model with language guidance at both the task and class levels. Related work: current continual learning methods fall into the following directions: Regularization-based methods: identify the task-specific important ...
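As a hedged illustration of the LGCL mechanism described above: the guidance can be read as pulling learned prompt keys and backbone output features toward frozen language embeddings of the task and class names. The loss form below is a minimal sketch, not the paper's exact objective; `prompt_key`, `task_text_emb`, `class_logits_emb`, and `class_text_emb` are assumed to be precomputed tensors (e.g., from a frozen CLIP-style text encoder).

```python
import torch
import torch.nn.functional as F

def language_guidance_loss(prompt_key: torch.Tensor,
                           task_text_emb: torch.Tensor,
                           class_logits_emb: torch.Tensor,
                           class_text_emb: torch.Tensor) -> torch.Tensor:
    """Pull learned representations toward frozen language embeddings.

    prompt_key:       (D,)   learned key of the current task's prompt
    task_text_emb:    (D,)   frozen text embedding of the task description
    class_logits_emb: (B, D) per-sample output features of the backbone
    class_text_emb:   (B, D) frozen text embeddings of the ground-truth class names
    """
    # Task-level guidance: align the prompt key with the task's language embedding.
    task_loss = 1.0 - F.cosine_similarity(prompt_key, task_text_emb, dim=-1)
    # Class-level guidance: align output features with class-name embeddings.
    class_loss = (1.0 - F.cosine_similarity(class_logits_emb, class_text_emb, dim=-1)).mean()
    return task_loss + class_loss
```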
Prompt-based Continual Learning (PCL) has gained considerable attention as a promising continual learning solution because it achieves state-of-the-art performance while preventing privacy violations and memory overhead problems. Nonetheless, existing PCL approaches face significant computational burdens ...
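The computational burden mentioned here typically stems from the key-query design of methods like L2P and DualPrompt: a first, prompt-free forward pass produces a query that selects prompts, and only then does the actual prompted pass run. A minimal sketch of that two-pass pattern, assuming a hypothetical `vit` backbone that returns pooled features and accepts an optional `prompts` argument:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_prompts(vit, prompt_keys, prompt_values, x, top_k=5):
    """First (extra) forward pass: run the backbone without prompts to get a
    query feature, then pick the top-k prompts whose keys best match it."""
    query = vit(x)                                                         # (B, D), no prompts
    sim = F.normalize(query, dim=-1) @ F.normalize(prompt_keys, dim=-1).T  # (B, P)
    idx = sim.topk(top_k, dim=-1).indices                                  # (B, k)
    return prompt_values[idx]                                              # (B, k, L, D)

def forward_with_prompts(vit, prompt_keys, prompt_values, x):
    prompts = select_prompts(vit, prompt_keys, prompt_values, x)
    # Second forward pass: the backbone now consumes the selected prompts
    # (hypothetical `prompts=` keyword), roughly doubling compute per example.
    return vit(x, prompts=prompts)
```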
In this work, we reveal that current prompt-based continual learning strategies fall short of their full potential under the more realistic setting of self-supervised pre-training, which is essential for handling vast quantities of unlabeled data in practice. This is largely due to the difficulty of ...
Related repositories: NLU_NLG Winter Semester (question-answering, squad, prompt-based-learning, flan-t5; Jupyter Notebook) and NeurAI-Lab/AGILE (lifelong-learning, continual-learning, prompt-based-learning; Python).
We eliminate the initial forward pass of prompt-based continual learning methods, a pass that doubles training and inference time. Moreover, we propose a topic-aware prompt pool that employs neural topic embeddings as fixed keys. This strategy ensures diverse and effective prompt usage, addressing the ...
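A hedged sketch of the fixed-key idea: because the keys are frozen topic embeddings rather than parameters trained jointly with a learned query, selection cannot collapse onto a few prompts, and the query no longer requires the full extra backbone pass shown earlier. The class below is illustrative; `topic_embs` is assumed to be a precomputed matrix of neural topic embeddings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicPromptPool(nn.Module):
    """Prompt pool whose keys are fixed (non-trainable) topic embeddings.

    topic_embs: (P, D) precomputed neural topic embeddings, used as frozen keys.
    Only the prompt token values remain trainable.
    """
    def __init__(self, topic_embs: torch.Tensor, prompt_len: int = 8, top_k: int = 4):
        super().__init__()
        num_prompts, dim = topic_embs.shape
        self.register_buffer("keys", F.normalize(topic_embs, dim=-1))  # fixed keys
        self.values = nn.Parameter(torch.randn(num_prompts, prompt_len, dim) * 0.02)
        self.top_k = top_k

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (B, D) any cheap input embedding; no extra full backbone pass needed
        sim = F.normalize(query, dim=-1) @ self.keys.T  # (B, P) cosine similarities
        idx = sim.topk(self.top_k, dim=-1).indices      # (B, k) best-matching prompts
        return self.values[idx]                         # (B, k, L, D) prompt tokens
```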
3.2 Continual Learning via Soft Prompt
3.3 Convergence Analysis
3.4 Ablation Study

ProQA: Structural Prompt-based Pre-training for Unified Question Answering (arxiv.org/abs/2205.04040)

0. Abstract
This paper was accepted at NAACL 2022 and is joint QA work from Sun Yat-sen University, MSRA, Tsinghua University, The Chinese University of Hong Kong, and Langboat Technology. The authors ...
Section 2 introduces the background of our study, including software vulnerability assessment and continual learning. Section 3 presents the framework and details of our proposed method SVACL. Section 4 describes the empirical settings of our study, including research questions and design motivation, ...
We address these two issues by proposing AdaPrompt, which adaptively retrieves external data for continual pretraining of PLMs by exploiting both task and prompt characteristics. In addition, we draw on knowledge from Natural Language Inference models to derive adaptive verbalizers. Experimental ...
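One way to read "adaptive verbalizers from NLI models" is to score candidate label words by how strongly an off-the-shelf NLI model judges the input to entail a hypothesis built around each word. The sketch below uses the Hugging Face zero-shot-classification pipeline; the template and candidate words are illustrative assumptions, not the paper's exact procedure.

```python
from transformers import pipeline

# Off-the-shelf NLI model reused as a zero-shot classifier.
nli = pipeline("zero-shot-classification", model="roberta-large-mnli")

def rank_label_words(text: str, candidates: list[str]) -> list[tuple[str, float]]:
    """Rank candidate verbalizer words by NLI entailment with the input text."""
    out = nli(text, candidate_labels=candidates,
              hypothesis_template="This example is {}.")
    return list(zip(out["labels"], out["scores"]))

# Example: pick verbalizer words for a sentiment task.
print(rank_label_words("The movie was an absolute delight.",
                       ["great", "terrible", "boring", "wonderful"]))
```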
named entity recognition; prompt learning; pre-trained language model; BERT adapter

1. Introduction
Named entity recognition (NER) aims to identify entity boundaries and category labels in unstructured text. It is an important natural language processing task owing to its applications in ...
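For context on the "BERT adapter" keyword: an adapter is the standard bottleneck module inserted into each transformer layer with a residual connection, so only the small adapter weights are trained while BERT stays frozen. A minimal sketch with illustrative dimensions:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add.
    Inserted after a transformer sub-layer; the backbone stays frozen."""
    def __init__(self, hidden_dim: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_dim)
        self.act = nn.GELU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(self.act(self.down(h)))  # residual keeps the frozen path intact
```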
Experimental results on 11 QA benchmarks demonstrate that ProQA consistently boosts performance across full-data fine-tuning, few-shot learning, and zero-shot testing scenarios. Furthermore, ProQA exhibits strong ability in both continual learning and transfer learning by ...