Multi-domain Task Incremental Learning: Experiments: Motivation: Continual learning can help a pre-trained vision model generalize effectively to downstream tasks without retraining. However, CLIP's zero-shot ability drops noticeably after catastrophic forgetting. Existing continual-learning methods can prevent forgetting by replaying previous data, but because CLIP's training dataset is private, this approach is not feasible here.
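The replay idea mentioned above can be sketched as a bounded memory of past examples that is mixed into training on new tasks; the class name, capacity, and reservoir-sampling policy below are illustrative assumptions, not the mechanism of any specific method.

```python
import random

class ReplayBuffer:
    """Bounded memory of past training examples for replay-based CL.

    Uses reservoir sampling so that every example seen so far has an
    equal chance of remaining in the buffer.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Replace a random slot with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        """Draw a replay mini-batch from the stored examples."""
        return random.sample(self.data, min(k, len(self.data)))
```

This is exactly the part that breaks down for CLIP: the buffer must be filled from the original training data, which is private.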
Zero-shot learning (ZSL) is a paradigm for classifying objects from classes that are not available at training time. ZSL methods have attracted considerable attention in recent years because of their ability to classify unseen/novel class examples. Most of the existing approaches on...
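In the CLIP-style instantiation of ZSL, an unseen class can still be predicted by comparing the image embedding against embeddings of the candidate class names. A minimal sketch with toy vectors (the function name and the use of cosine similarity over pre-computed embeddings are assumptions for illustration):

```python
import numpy as np

def zero_shot_classify(image_feat, class_text_feats):
    """Return the index of the class whose text embedding is most
    similar (by cosine similarity) to the image embedding.

    image_feat:       (d,) embedding of the input image
    class_text_feats: (n_classes, d) embeddings of class descriptions
    """
    img = image_feat / np.linalg.norm(image_feat)
    txt = class_text_feats / np.linalg.norm(class_text_feats, axis=1, keepdims=True)
    sims = txt @ img  # one cosine similarity per candidate class
    return int(np.argmax(sims))
```

No classifier head is trained for the candidate classes, which is why this ability is fragile: fine-tuning that distorts the shared embedding space degrades it.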
Currently, continual learning, incremental learning, and lifelong learning are generally regarded as equivalent terms. All of them train a model on a continuous data stream: over time, more data gradually becomes available, while old data may gradually become unavailable due to storage limits or privacy protection, and the type and number of learning tasks are not predefined (e.g., the number of classes in a classification task).
Zero-shot action recognition requires a strong ability to generalize from pre-training and seen classes to novel unseen classes. Similarly, continual learning aims to develop models that can generalize effectively and learn new tasks without forgetting the ones previously learned. The generalization goals...
First, GEM does not exploit structured task descriptors, which could be used to enable zero-shot learning. Second, the experiments did not examine advanced memory management (e.g., building coresets for tasks). Third, each GEM iteration requires one backward pass per task, which increases computation time; reducing this cost is a direction the authors themselves plan to study.
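The per-task backward passes above produce reference gradients on past-task memories, against which the current gradient is constrained. GEM itself solves a quadratic program over all such constraints; the sketch below shows only the single-constraint closed-form projection popularized by the A-GEM simplification, with illustrative names:

```python
import numpy as np

def project_gradient(g, g_ref):
    """Project the current-task gradient g so it does not conflict
    with the reference gradient g_ref computed on past-task memory.

    If g . g_ref >= 0, the update does not increase past-task loss
    (to first order) and is kept as-is; otherwise g is projected onto
    the half-space {v : v . g_ref >= 0}.
    """
    dot = g @ g_ref
    if dot >= 0:
        return g  # no interference with past tasks
    return g - (dot / (g_ref @ g_ref)) * g_ref
```

Computing `g_ref` is where the extra backward passes (one per stored task in GEM, one on a mixed memory batch in A-GEM) come from.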
For example, CLIP's zero-shot performance alone already surpasses existing models, whereas CL methods are still struggling to degrade a little less than retraining the base model. But also...
We find that existing CL methods hardly prevent the forgetting phenomenon for zero-shot transfer ability in continual learning of a pre-trained vision-language model. As shown in Fig. 1 (a), the CL with a pre-trained vision-language mo...
Zero-shot learning (Lampert et al., 2009, Palatucci et al., 2009) and one-shot learning (Fei-Fei et al., 2003, Vinyals et al., 2016) aim at performing well on novel tasks but do not prevent catastrophic forgetting on previously learned tasks. An early attempt to realize lifelong ...
Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models - Thunderbeee/ZSCL