This paper targets the central challenge of Continual Learning (CL): learning from a stream of newly arriving tasks while minimizing catastrophic forgetting of previously learned ones. Proposed method: SEMA (Self-Expansion of pre-trained models with Modularized Adaptation), which uses modularized adapters to automatically ...
In this paper, we propose orthogonal low-rank adaptation (O-LoRA), a simple and efficient approach for continual learning in language models, effectively mitigating catastrophic forgetting while learning new tasks. Specifically, O-LoRA learns tasks in different (low-rank) vector subspaces that are ...
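The snippet above describes learning each task's low-rank update in a subspace orthogonal to those of earlier tasks. A minimal sketch of that idea, assuming a single linear layer and an illustrative Frobenius-norm overlap penalty (the variable names and penalty weight are assumptions, not the authors' implementation):

```python
import torch

# Each task t gets its own low-rank update delta_W_t = B_t @ A_t.
# The new task's A is pushed to be orthogonal to frozen adapters from
# earlier tasks by penalizing the overlap A_new @ A_old^T.

d, k, r = 64, 64, 4                       # layer dims and LoRA rank (assumed)

A_old = torch.randn(r, k)                 # frozen adapter from a previous task
A_new = torch.randn(r, k, requires_grad=True)
B_new = torch.zeros(d, r, requires_grad=True)

def orthogonality_penalty(A_new, A_old):
    # Squared Frobenius norm of the overlap between the two row-spaces.
    return (A_new @ A_old.T).pow(2).sum()

W = torch.randn(d, k)                     # frozen pre-trained weight
x = torch.randn(8, k)
h = x @ (W + B_new @ A_new).T             # forward pass with the LoRA update
loss = h.pow(2).mean() + 0.1 * orthogonality_penalty(A_new, A_old)
loss.backward()                           # gradients reach only A_new, B_new
```

With `B_new` initialized to zero the update starts as a no-op, matching standard LoRA practice; the penalty term then steers `A_new` away from the subspace spanned by `A_old` during training.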
DT offers advantages in learning efficiency, distribution shift mitigation, and zero-shot generalization but exacerbates the forgetting problem during supervised parameter updates. We introduce multi-head DT (MH-DT) and low-rank adaptation DT (LoRA-DT) to mitigate DT's forgetting problem. MH-DT ...
In this work, we propose a new continual whole-body organ segmentation model with light-weighted low-rank adaptation (LoRA). We first train and freeze a pyramid vision transformer (PVT) base segmentation model on the initial task, then continually add light-weighted trainable LoRA parameters to ...
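The freeze-then-adapt recipe above can be sketched in a few lines. Assuming a single `nn.Linear` stands in for one projection layer of the PVT backbone (the class name and rank are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

# The base weights are frozen after the initial task; only small
# low-rank adapter matrices are trained for each new task.

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False       # freeze the base model
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        # Base output plus the light-weighted low-rank correction.
        return self.base(x) + x @ self.A.T @ self.B.T

layer = LoRALinear(nn.Linear(32, 16))
trainable = [n for n, p in layer.named_parameters() if p.requires_grad]
# trainable now contains only the adapter matrices 'A' and 'B'
```

Because each task only adds the small `A`/`B` pair, the per-task storage cost stays far below that of duplicating the full backbone.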
| Setting | Objective | Distribution shift |
|---|---|---|
| Continual Learning | learn a non-stationary data distribution without forgetting previous knowledge | data-distribution shift during training |
| Foundation Model | unsupervised learning on large-scale unlabeled data | data-distribution shift in pre-training, fine-tuning |
| Domain Adaptation | adapt to target domain while maintaini... | |
Low-Rank Continual Pyramid Vision Transformer: Incrementally Segment Whole-Body Organs in CT with Light-Weighted Adaptation [MICCAI 2024] [paper]
Federated Cross-Incremental Self-Supervised Learning for Medical Image Segmentation [TNNLS 2024] [paper]
CLMS: Bridging Domain Gaps in Medical Imaging Segm...
A low stable rank means that the units of a network do not provide much diversity; the base deep-learning system loses much more diversity than the Shrink and Perturb and continual backpropagation systems. All results are averaged over 30 runs; the solid lines represent the mean and the shade...
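The stable rank referenced above is the squared Frobenius norm of a weight matrix divided by its squared spectral norm, a smooth proxy for how many directions the matrix actually uses. A short sketch of how one might compute it (function name and matrix sizes are illustrative):

```python
import numpy as np

def stable_rank(W: np.ndarray) -> float:
    # stable rank = ||W||_F^2 / ||W||_2^2
    s = np.linalg.svd(W, compute_uv=False)   # singular values, descending
    return float((s ** 2).sum() / (s[0] ** 2))

rng = np.random.default_rng(0)
diverse = rng.standard_normal((64, 64))       # roughly isotropic weights
collapsed = np.outer(rng.standard_normal(64), # nearly rank-one weights
                     rng.standard_normal(64))
# The rank-one matrix has stable rank ~1; the isotropic one is much higher,
# illustrating the loss of unit diversity the passage describes.
```

A network whose layers drift toward low stable rank behaves like the rank-one case: many units, but little effective diversity among them.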
Table 1. Non-IID learning scenarios in Federated Learning, and the strategies that could potentially solve each situation. Strategies that deal with changes in both the input space and the behaviour are placed only in the last column, and not in the previous ones. 3.2. State-of-the-art clas...