Keywords: Black-box optimization · Learning to optimize · Meta-learning · Recurrent neural networks · Constrained optimization. Recently, neural networks trained as optimizers under the "learning to learn" or meta-learning framework have been shown to be effective for a broad range of optimization tasks including derivative-free ...
MetaBox: A Benchmark Platform for Meta-Black-Box Optimization with Reinforcement Learning (https://arxiv.org/abs/2310.08252) gmc-drl.github.io/MetaBox/
A. Combining radial basis function surrogates and dynamic coordinate search in high-dimensional expensive black-box optimization. Engineering Optimization 45, 529–555 (2013).
Eriksson, D., Bindel, D. & Shoemaker, C. A. pySOT: Python surrogate optimization toolbox. https:...
Unser. Monte-Carlo SURE: A black-box optimization of regularization parameters for general denoising algorithms. IEEE Transactions on Image Processing, 17(9):1540–1554, 2008.
[36] Jae Woong Soh, Sunwoo Cho, and Nam Ik Cho. Meta-transfer learning for zero-...
Symbolic Discovery of Optimization Algorithms. This work formulates algorithm discovery as program search and applies it to discovering optimization algorithms for deep neural network training, yielding a new optimizer, Lion. Results across a broad range of tasks, including image classification, vision-language contrastive learning, diffusion models, and language modeling, show that Lion outperforms mainstream optimizers such as Adam and Adafactor. For example, on diffusion models, Lion ...
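The Lion update rule found by this search is compact enough to write out; below is a minimal NumPy sketch of the published rule. The hyperparameter names `lr`, `beta1`, `beta2`, and `wd` follow common optimizer convention and are not taken from the summary above.

```python
import numpy as np

def lion_step(w, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion step: sign of an interpolated momentum, plus decoupled weight decay.

    Only one momentum buffer m is kept (no second-moment buffer as in Adam),
    and sign() makes every coordinate move by exactly lr.
    """
    update = np.sign(beta1 * m + (1.0 - beta1) * g)  # interpolate momentum and gradient, then take the sign
    w = w - lr * (update + wd * w)                   # decoupled weight decay, AdamW-style
    m = beta2 * m + (1.0 - beta2) * g                # momentum tracks the raw gradient
    return w, m
```

Compared with Adam, there is no per-coordinate second-moment estimate; the sign gives every coordinate a uniform step magnitude, which keeps memory and compute low.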
This week's highlighted papers include Alibaba DAMO Academy winning the KDD 2022 Best Paper Award, the first such win by a Chinese company, and Meta's release of an 11-billion-parameter model that beats Google's PaLM, among other work. Contents: FederatedScope-GNN: Towards a Unified, Comprehensive and Efficient Package for Federated Graph Learning; High-Resolution Image Synthesis with Latent Diffusion Models ...
5. Black-Box Adaptation
The key point here is how to make a neural network learn p(µ|D_train, ∂), where µ is the model's task prior.
The black-box adaptation framework
Pros: expressive; easy to integrate
Cons: tends to become a difficult optimization problem on complex tasks; requires a lot of data
6. Optimization-Based Inference ...
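The black-box adaptation idea from point 5 can be sketched structurally: a shared meta-learner g pools the support set D_train and directly emits task parameters µ (here `mu`) that parameterize the predictor. All shapes, weights, and function names below are illustrative assumptions, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box meta-learner: a single shared weight matrix W_g maps a
# pooled summary of the support set D_train to the task parameters mu.
W_g = rng.normal(size=(8, 6))

def adapt(support_x, support_y):
    """g(D_train) -> mu: average the (x, y) pairs, then map to task parameters."""
    pooled = np.concatenate([support_x, support_y], axis=1).mean(axis=0)  # shape (6,)
    return W_g @ pooled                                                   # mu, shape (8,)

def predict(mu, x):
    """f(x; mu): a linear predictor whose weights ARE the emitted task parameters."""
    W_task = mu.reshape(2, 4)
    return x @ W_task.T
```

Adaptation is a single forward pass through g, which is why the approach is expressive and easy to integrate; but nothing constrains µ to behave like the output of an optimizer, which is where the listed drawbacks come from.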
A variety of approaches have been proposed, differing in how the adaptation portion of the training process is performed. These fall broadly into three categories: "black-box" or model-based, metric-based, and optimization-based approaches. ...
Optimization approach: bi-level optimization
Classic algorithm 2: ProtoNet (Prototypical Networks for Few-Shot Learning)
Originates from few-shot learning, a supervised application task within meta-learning
Algorithm: learn a better embedding to support few-shot classification
Squared Euclidean distance is a Bregman divergence, which guarantees that the mean can serve as the class center
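The prototype construction above (class mean as center, squared Euclidean distance for classification) fits in a few lines; a minimal NumPy sketch with hypothetical helper names `prototypes` and `classify`, operating on already-embedded vectors:

```python
import numpy as np

def prototypes(support_x, support_y):
    """Class prototype = mean of each class's support embeddings."""
    classes = np.unique(support_y)
    return classes, np.stack([support_x[support_y == c].mean(axis=0) for c in classes])

def classify(query_x, classes, protos):
    """Assign each query to the nearest prototype under squared Euclidean distance."""
    d2 = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]
```

Because squared Euclidean distance is a Bregman divergence, the mean is exactly the minimizer of the total within-class distance, which is what justifies using it as the class center.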
Idea: optimization as a model. Predict the optimization process of the classifier's parameters (Ravi and Larochelle, ICLR 2017) [4]. The vanilla gradient update vs. the LSTM cell-state update:
\begin{align} \theta_{t} &=\theta_{t-1}-\alpha_{t} \nabla_{\theta_{t-1}} \mathcal{L}_{t} \\ c_{t} &=f_{t} \odot c_{t-1}+i_{t} \odot \tilde{c}_{t} \end{align}
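The analogy becomes exact for one particular gating choice: with f_t = 1, i_t = α_t, and c̃_t = -∇_{θ_{t-1}} L_t, the cell-state update reproduces the vanilla gradient step. A quick numerical check (the function name is illustrative):

```python
import numpy as np

def lstm_cell_update(c_prev, f, i, c_tilde):
    """c_t = f ⊙ c_{t-1} + i ⊙ c̃_t, the LSTM cell-state update."""
    return f * c_prev + i * c_tilde

theta = np.array([2.0])
alpha = 0.1
grad = 2 * theta                                   # gradient of L(θ) = θ²
sgd = theta - alpha * grad                         # ordinary gradient step
lstm = lstm_cell_update(theta, f=1.0, i=alpha, c_tilde=-grad)
```

Letting the meta-learner output f_t and i_t rather than fixing them to these values is what turns the LSTM into a learned optimizer.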