The paper discusses several ways in which unsupervised learning can help Deep Learning and puts forward the view of Pre-training as a Regularizer. The argument is made from experimental data, without a theoretical foundation; this is the most criticized aspect of Deep Learning at its current stage, the lack of a complete supporting theoretical framework. 4 Learning Deep Architectures for AI Yoshua Bengio's survey article on Deep Learning; if you want a rough understanding of the Deep Learning field...
A personal collection of Deep Learning papers I have read, organized into several parts. Some parts cross over or overlap in content, so there is no need to agonize over whether something counts as DNN or CNN; I have only sorted them into rough categories. Only part of the list is organized so far; the rest will be updated continually. I. RNN 1 Recurrent neural network based language model The pioneering work on applying RNNs to language modeling 2 Statistical Language Models Based o...
The Deep Learning book should be regarded as a textbook: overall it leans toward fundamentals and is not a good place to follow the latest work; for that, arxiv (no matter how...
More and more people are wading into Deep Learning, and it is no longer hard to jump straight into Image Classification, Speech Recognition, or even building a complete Machine Translation System. But the nested nonlinear structure also makes the Neural Network framework feel like a black box: how do we explain which factors actually produced a given prediction? Why should an AI System be interpretable? For example, in AI...
deep learning research ever since. In February 2015 their paper “Human-level control through deep reinforcement learning” was featured on the cover of Nature, one of the most prestigious journals in science. In this paper they applied the same model to 49 different games and achieved superhuman...
VI. Shallow Learning and Deep Learning Shallow learning was the first wave of machine learning. In the late 1980s, the invention of the backpropagation algorithm for artificial neural networks (also called the Back Propagation or BP algorithm) brought hope to machine learning and set off a wave of machine learning based on statistical models, a wave that continues to this day. People found that with the BP algorithm, an artificial neural network...
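The BP algorithm mentioned above can be sketched in a few lines: a one-hidden-layer network trained by propagating the output error backward through the layers. Everything here (toy data, layer sizes, learning rate) is an illustrative assumption, not taken from any of the papers listed.

```python
import numpy as np

# Minimal backpropagation sketch: one hidden layer, toy regression task.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                 # 64 samples, 3 features
y = X.sum(axis=1, keepdims=True) ** 2        # toy nonlinear target

W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.01
losses = []

for step in range(500):
    # forward pass
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = h @ W2 + b2                       # linear output
    err = pred - y                           # dL/dpred for L = (squared error)/2
    losses.append(float((err ** 2).mean()))

    # backward pass: propagate the error gradient layer by layer
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)         # tanh'(z) = 1 - tanh(z)^2
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0)

    # gradient descent update
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
```

The training loss should fall steadily as the error signal flows backward and each layer's weights are nudged against their gradient.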
Notes: I like the idea of this small and concise piece of research; it answers a clear question clearly. I think more research could be done in this direction. Human few-shot learning of compositional instructions Authors: Brenden M. Lake, Tal Linzen, Marco Baroni ...
1 PAC-Net: A Model Pruning Approach to Inductive Transfer Learning (ICML 2022) Link: PAC-Net: A Model Pruning Approach to Inductive Transfer Learning Topics: inductive transfer learning, pruning Previous transfer strategies take a pre-trained model as the initialization and fine-tune it on the target task; this paper instead performs transfer learning by pruning the model, proposing PAC-Net.
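The pruning primitive that such transfer methods build on can be illustrated with plain magnitude pruning: keep only the largest-magnitude weights and zero out the rest. This is a generic sketch, not PAC-Net's actual prune/retrain procedure; `magnitude_prune` and `keep_ratio` are illustrative names.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, keep_ratio: float):
    """Return (pruned_weights, keep_mask), keeping the top `keep_ratio`
    fraction of entries by absolute value and zeroing the rest."""
    flat = np.abs(weights).ravel()
    k = max(1, int(round(keep_ratio * flat.size)))
    threshold = np.partition(flat, -k)[-k]   # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))                  # pretend pre-trained weights
W_pruned, mask = magnitude_prune(W, keep_ratio=0.25)
```

A pruning-based transfer scheme would then typically freeze the kept weights (which preserve the source task) and train only the zeroed slots on the target task, rather than fine-tuning everything.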
GPU Kernels for Block-Sparse Weights [Research at OpenAI] [article] [code] Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm [arXiv] Deep Learning Scaling is Predictable, Empirically [arXiv] [article] 2017-11 High-Resolution Image Synthesis and Semantic Manipul...