including foundation models, domain adaptation, meta-learning, test-time adaptation, generative models, reinforcement learning, and federated learning. By doing so, we provide a comprehensive examination of forgetting that covers a broader range of contexts and applications. In this survey, we divide forgetting in machine learning into two categories according to the specific application scenario: harmful forgetting and beneficial forgetting.
This article reviews computational methods motivated by biological learning, including critical developmental stages and curriculum learning; transfer learning, which reuses existing knowledge when learning new tasks; reinforcement learning driven by intrinsic motivation and self-supervision for autonomous exploration of the environment; and multisensory systems for cross-modal lifelong learning. The figure shows the main components for developing autonomous agents that learn over long time spans in complex environments: Developmental ...
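Curriculum learning, one of the methods listed above, can be sketched very simply: order training samples from easy to hard and grow the accessible pool over time. The `difficulty` scoring and the linear pacing schedule below are illustrative assumptions, not the procedure of any specific cited paper.

```python
# Minimal curriculum-learning sketch: present training samples in order of
# increasing difficulty, expanding the accessible pool each epoch.
# The difficulty function and pacing schedule are illustrative assumptions.

def curriculum_batches(samples, difficulty, epochs, start_frac=0.25):
    """Yield (epoch, subset) pairs, exposing harder samples over time."""
    ordered = sorted(samples, key=difficulty)  # easy -> hard
    for epoch in range(epochs):
        # Linear pacing: available fraction grows from start_frac to 1.0.
        frac = start_frac + (1.0 - start_frac) * epoch / max(epochs - 1, 1)
        cutoff = max(1, int(len(ordered) * frac))
        yield epoch, ordered[:cutoff]

# Toy usage: here "difficulty" is simply the magnitude of the input.
data = [(x, 2 * x) for x in range(10)]
for epoch, subset in curriculum_batches(data, difficulty=lambda s: abs(s[0]), epochs=4):
    print(epoch, len(subset))
```

In practice the difficulty score would come from a heuristic (e.g., sequence length) or a teacher model's loss, and the pacing schedule is a tunable design choice.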
Continual Learning for Domain Adaptation in Chest X-ray Classification
Axel Saalbach, Heinrich Schulz, Matthias Lenga
1. Introduction
Test-time domain adaptation aims to adapt a source pre-trained model by learning from the unlabeled test (target) data during inference time. Due to the domain shift between the source training data and the target test data, adaptation is necessary to achieve good performance...
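A common instantiation of this idea is to adapt the model by minimizing the entropy of its own predictions on the unlabeled target batch (in the spirit of entropy-minimization TTA such as Tent). The linear softmax classifier, learning rate, and toy data below are illustrative assumptions, not the method of the paper above.

```python
import numpy as np

# Test-time adaptation sketch: adapt a "source pre-trained" linear softmax
# classifier on unlabeled target data by gradient descent on the mean
# prediction entropy. Shapes, learning rate, and data are illustrative.

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(p):
    """Mean Shannon entropy of a batch of probability rows."""
    return -(p * np.log(p + 1e-12)).sum(axis=1).mean()

def adapt(W, X, lr=0.1, steps=100):
    """Descend the gradient of mean prediction entropy H w.r.t. W.
    For logits z: dH/dz_j = -p_j * (log p_j - sum_i p_i log p_i)."""
    for _ in range(steps):
        P = softmax(X @ W)
        logP = np.log(P + 1e-12)
        G = -P * (logP - (P * logP).sum(axis=1, keepdims=True))
        W = W - lr * (X.T @ G) / len(X)
    return W

rng = np.random.default_rng(0)
X_target = rng.normal(size=(20, 3))        # unlabeled target features
W_source = 0.1 * rng.normal(size=(3, 2))   # stand-in for pre-trained weights
before = entropy(softmax(X_target @ W_source))
W_adapted = adapt(W_source, X_target)
after = entropy(softmax(X_target @ W_adapted))
```

Driving entropy down pushes predictions toward confident (low-entropy) outputs on the target distribution; real TTA methods typically restrict which parameters are updated (e.g., only normalization statistics and affine parameters) to keep adaptation stable.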
Domain-Incremental Learning
PLMs-based DIL
Traditional Methods
- Overcoming Catastrophic Forgetting During Domain Adaptation of Seq2seq Language Generation [paper]
- Learning to Solve NLP Tasks in an Incremental Number of Languages [paper]
- Toward Continual Learning for Conversational Agents [paper]
...
CBA: Improving Online Continual Learning via Continual Bias Adaptor
Quanziang Wang1, Renzhen Wang1*, Yichen Wu2, Xixi Jia3, Deyu Meng1,4*
1 Xi'an Jiaotong University, 2 City University of Hong Kong, 3 Xidian University, 4 Macau University of Science and Technology
quan...
Tasks: AutoML, Continual Learning, Domain Adaptation, Neural Architecture Search
Datasets: Office-Home, DomainNet, Office-31, PACS
Continual Learning: learn from a non-stationary data distribution without forgetting previous knowledge; the data-distribution shift occurs during training.
Foundation Model: unsupervised learning on large-scale unlabeled data; the data-distribution shift occurs between pre-training and fine-tuning.
Domain Adaptation: adapt to the target domain while maintaini...
Continual Learning (CL): studies how to avoid forgetting old-task knowledge while learning new tasks. Methods include regularization-based, architecture-based, and rehearsal-based approaches. The paper specifically considers the class-incremental learning setting. Fine-tuning of Vision-Language Models (VLMs): the application of pre-trained VLMs (such as CLIP) to downstream tasks. Fine-tuning methods include feature adap...
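Of the method families just named, rehearsal methods are the most direct to sketch: keep a small bounded memory of past examples and mix it into each new task's batches. The reservoir-sampling buffer below is a generic illustration (capacity, seed, and mixing ratio are assumptions), not the specific method of any paper cited here.

```python
import random

# Rehearsal (replay) sketch for continual learning: a bounded memory filled
# by reservoir sampling, so every example ever seen has equal probability
# capacity/seen of residing in memory. Capacity and seed are illustrative.

class ReplayBuffer:
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        """Reservoir sampling over the stream of examples."""
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, k):
        """Draw up to k stored examples for replay."""
        return self.rng.sample(self.memory, min(k, len(self.memory)))

# Usage: interleave replayed old-task examples with the current task's batch.
buf = ReplayBuffer(capacity=100)
for x in range(1000):                 # stream of earlier-task examples
    buf.add(x)
batch = buf.sample(8) + [("new_task_example", i) for i in range(8)]
```

Training on such mixed batches keeps gradients from old tasks in play, which is what counteracts catastrophic forgetting in rehearsal-based CL.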
Evolving Standardization for Continual Domain Generalization over Temporal Drift
Mixue Xie, Shuang Li, Longhui Yuan, Chi Liu, Zehui Dai
Overcoming Recency Bias of Normalization Statistics in Continual Learning: Balance and Adaptation
Yilin Lyu, Liyuan Wang, Xingxing Zhang, Zicheng Sun, Hang Su, Jun Zhu, Liping Ji...