Keywords: Black-box optimization; Learning to optimize; Meta-learning; Recurrent neural networks; Constrained optimization. Recently, neural networks trained as optimizers under the "learning to learn" or meta-learning framework have been shown to be effective for a broad range of optimization tasks, including derivative-free ...
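The "learning to learn" idea in the abstract above can be illustrated with a deliberately minimal sketch. Instead of a full RNN optimizer, the learned update rule here is a single parameter `phi` (a learned step size), meta-trained by differentiating through `T` unrolled inner steps on the quadratic f(x) = x^2; the meta-gradient is written in closed form since the unroll has an analytic solution. All names and constants are illustrative assumptions, not taken from any specific paper.

```python
# Meta-learn a one-parameter "optimizer": a step size phi, trained by
# backpropagating through T unrolled steps of x <- x - phi * f'(x) on f(x) = x^2.
T, phi, meta_lr = 5, 0.05, 0.01
x0 = 3.0
for _ in range(200):
    # Unrolled inner optimization: x_{t+1} = x_t * (1 - 2*phi),
    # so after T steps x_T = x0 * (1 - 2*phi)**T.
    xT = x0 * (1 - 2 * phi) ** T
    # Meta-loss is f(x_T) = x_T**2; its gradient w.r.t. phi in closed form:
    # d(xT)/d(phi) = x0 * T * (1 - 2*phi)**(T-1) * (-2)
    dloss_dphi = 2 * xT * x0 * T * (1 - 2 * phi) ** (T - 1) * (-2)
    phi -= meta_lr * dloss_dphi
```

A trained RNN optimizer generalizes this: the scalar `phi` becomes a recurrent network mapping gradient histories to updates, and the closed-form meta-gradient becomes backpropagation through the unrolled trajectory.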
Recently, Meta-Black-Box Optimization with Reinforcement Learning (MetaBBO-RL) has showcased the power of leveraging RL at the meta-level to mitigate manual fine-tuning of low-level black-box optimizers. However, this field is hindered by the lack of a unified benchmark. To fill this gap, ...
MetaBox: A Benchmark Platform for Meta-Black-Box Optimization with Reinforcement Learning (https://arxiv.org/abs/2310.08252). Project page: gmc-drl.github.io/MetaBox/. License: BSD-3-Clause.
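To make the MetaBBO-RL idea concrete, here is a toy sketch (not MetaBox's API, and much simpler than the RL agents it benchmarks): a meta-level gradient-bandit agent chooses the mutation step size of a low-level (1+1) elitist search, and is rewarded by the fitness improvement it produces. The objective, step-size menu, and learning rate are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy low-level black-box objective (a stand-in for a real benchmark problem)."""
    return float(np.sum(x ** 2))

# Meta-level "agent": a gradient bandit over candidate mutation step sizes.
step_sizes = np.array([1.0, 0.3, 0.1, 0.03])
prefs = np.zeros(len(step_sizes))

x = rng.normal(size=5)
fx = sphere(x)
for _ in range(300):
    p = np.exp(prefs - prefs.max()); p /= p.sum()
    a = rng.choice(len(step_sizes), p=p)        # action: pick a step size
    cand = x + step_sizes[a] * rng.normal(size=5)
    fc = sphere(cand)
    reward = fx - fc                            # improvement made by the low-level search
    prefs -= 0.5 * reward * p                   # gradient-bandit update ...
    prefs[a] += 0.5 * reward                    # ... reinforcing improving step sizes
    if fc < fx:                                 # (1+1) elitist acceptance
        x, fx = cand, fc
```

The point of the meta-level is visible even in this toy: large steps earn reward early and small steps earn reward late, so the agent replaces a hand-tuned step-size schedule.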
Black-box/model-based meta-learning: the inner optimization is wrapped inside a single model's forward pass, and the outer optimization is performed over that model's hyperparameters, so the outer and inner optima are coupled. Parameter Initialization: one approach meta-learns only a parameter sub-space, separating out scale and shift to isolate the parameters the subspace must learn; the other question is whether a single initialization is suitable for a wide range of other potential tasks...
4.1 Black-Box Adaptation
4.2 Optimization-based inference
4.3 Non-parametric methods / Metric learning
4.4 Bayesian meta-learning
5. Meta-Learning Application
5.1 Few-Shot Image Classification
5.2 Few-Shot Image Segmentation
5.3 Others
This article gives an introduction to meta-learning and presents some classic meta-learning-based few-shot classification...
StanfordCS330DeepMulti-BlackBoxMetaLearningl2022ILecture4.mp4 (01:17:59)
StanfordCS330DeepMulti-Optimization-BasedMeta-Learningl2022ILecture5.mp4 (01:17:03)
StanfordCS330DeepMulti-Non-ParametricFew-ShotLearningl2022ILecture6.mp4 (01:17:45)
StanfordCS330IUnsupervisedPre-Training_ContrastiveLearningl2022I...
The modern engineering design process often employs computer simulations to evaluate candidate designs, a setup that results in computationally expensive black-box optimization problems. An established framework for solving such problems is evolutionary metamodel-assisted algorithms, in which the metamodel...
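The metamodel-assisted pattern described above can be sketched in a few lines: truly evaluated designs go into an archive, a cheap surrogate fit to the archive pre-screens each generation's offspring, and only the most promising few are sent to the expensive simulation. The inverse-distance-weighted surrogate below is an assumed stand-in for the kriging/RBF metamodels typically used, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_f(x):
    """Stand-in for a costly simulation (here just the sphere function)."""
    return float(np.sum(x ** 2))

archive_X, archive_y = [], []            # every design that was truly evaluated

def surrogate(x):
    """Inverse-distance-weighted prediction from the archive (a simple metamodel)."""
    X, y = np.array(archive_X), np.array(archive_y)
    d = np.linalg.norm(X - x, axis=1) + 1e-9
    w = 1.0 / d
    return float(np.dot(w, y) / np.sum(w))

parent = rng.normal(size=3)
archive_X.append(parent.copy()); archive_y.append(expensive_f(parent))
for _ in range(40):
    offspring = [parent + 0.3 * rng.normal(size=3) for _ in range(10)]
    # Pre-screen the 10 offspring with the cheap surrogate;
    # only the 2 most promising reach the expensive evaluation.
    for cand in sorted(offspring, key=surrogate)[:2]:
        archive_X.append(cand.copy()); archive_y.append(expensive_f(cand))
    parent = np.array(archive_X[int(np.argmin(archive_y))])  # elitist selection
```

The saving is the ratio of offspring generated to offspring truly evaluated (here 10:2 per generation); the surrogate's job is only to rank candidates well enough that the filtered set still contains good designs.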
Symbolic Discovery of Optimization Algorithms. This work formulates algorithm discovery as program search and applies it to discovering optimization algorithms for deep neural network training, yielding the new optimizer Lion. Results on a broad range of tasks, including image classification, vision-language contrastive learning, diffusion models, and language modeling, show that Lion outperforms mainstream optimizers such as Adam and Adafactor. For example, on diffusion models, Lion ...
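The Lion update rule discovered by that program search is compact: take the sign of an interpolation between the momentum and the current gradient, apply it with decoupled weight decay, then update the momentum with a second, slower coefficient. Below is a minimal sketch of that rule; the quadratic demo problem and the hyperparameter values are my assumptions, not the paper's tuned settings.

```python
import numpy as np

def lion_step(theta, grad, m, lr=1e-3, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update: sign of an interpolated momentum + decoupled weight decay."""
    update = np.sign(beta1 * m + (1.0 - beta1) * grad)  # sign gives unit-magnitude updates
    theta = theta - lr * (update + wd * theta)          # decoupled decay, AdamW-style
    m = beta2 * m + (1.0 - beta2) * grad                # momentum tracked with slower beta2
    return theta, m

# Toy demo: minimize f(x) = x^2 starting from x = 3.0
theta, m = np.array([3.0]), np.zeros(1)
for _ in range(6000):
    grad = 2.0 * theta
    theta, m = lion_step(theta, grad, m)
```

Because `sign` makes every coordinate's update the same magnitude, Lion's behavior is governed almost entirely by the learning rate and weight decay, which is part of why it is reported to need a smaller learning rate than Adam.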
Parameter Initialization [3.1 Optimization-based]. Here w corresponds to the initialization parameters of the inner loop (as in MAML); one or a few gradient steps then suffice to obtain good results in the few-shot setting (without overfitting). This setting faces two main challenges: 1) the outer loop must handle a parameter set of the same order as the inner loop, so it may be necessary to separate out a parameter subset; 2) whether the same initialization ...
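The inner/outer structure described here can be sketched with first-order MAML on a toy task family (linear regression y = a*x with the slope a sampled per task); the task distribution, learning rates, and first-order approximation are all assumptions for illustration. The inner loop adapts the shared initialization on a support set, and the outer loop updates that initialization using the query-set gradient at the adapted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, a, xs):
    # d/dw of mean((w*x - a*x)^2) = 2*(w - a)*mean(x^2)
    return 2.0 * (w - a) * np.mean(xs ** 2)

w_meta, inner_lr, outer_lr = 0.0, 0.1, 0.05
for _ in range(500):
    a = rng.uniform(1.0, 3.0)               # sample a task: regression target y = a*x
    xs = rng.uniform(-1.0, 1.0, 10)         # support set for the inner step
    w_task = w_meta - inner_lr * loss_grad(w_meta, a, xs)   # inner-loop adaptation
    xq = rng.uniform(-1.0, 1.0, 10)         # query set for the meta-update
    # First-order MAML: the meta-gradient is taken at the adapted parameters,
    # skipping differentiation through the inner update itself.
    w_meta = w_meta - outer_lr * loss_grad(w_task, a, xq)
```

Challenge 1) from the text is visible even here: `w_meta` has exactly the dimensionality of the inner-loop parameters, which is why subspace or scale-and-shift parameterizations are used when the model is large. In this toy family the learned initialization settles near the center of the task distribution, from which one gradient step reaches any sampled slope.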
Non-stationary optimization in many-shot single-task meta-RL: in the multi-task setting, the inner loop can revisit the same tasks several times, which lets the meta-learner fit a stationary distribution of training tasks. In the single-task setting, however, the continually changing policy parameters mean the meta-learner itself faces a non-stationary optimization problem; this remains an open problem ...