A Gaussian process (hereafter GP)[1] is a probabilistic model widely used in machine learning. If a process is a GP, it is usually written as

$$ y = f(\mathbf{x}) + e, \qquad f(\mathbf{x}) \sim \mathcal{GP}\big(0,\; C(\mathbf{x}, \mathbf{x}')\big), \qquad e \sim \mathcal{N}(0, \sigma_n^2). $$

In the formula above, C is the kernel function, also called the covariance function; C has two hyperparameters that must either be set by hand or learned/estimated from the data. e denotes the Gaussian observation noise. The following...
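As a concrete illustration of the formula above, here is a minimal sketch of exact GP regression in NumPy, assuming an RBF kernel whose two hyperparameters are the lengthscale and the signal variance, plus a noise variance for e. All names here are illustrative, not taken from any particular library:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, signal_var=1.0):
    """RBF covariance C(x, x'); its two hyperparameters are the
    lengthscale and the signal variance."""
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return signal_var * np.exp(-0.5 * sq / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise_var=0.1, **kernel_args):
    """Posterior mean and pointwise variance of a zero-mean GP with
    Gaussian observation noise e ~ N(0, noise_var)."""
    n = len(X_train)
    K = rbf_kernel(X_train, X_train, **kernel_args) + noise_var * np.eye(n)
    K_s = rbf_kernel(X_train, X_test, **kernel_args)
    K_ss = rbf_kernel(X_test, X_test, **kernel_args)
    L = np.linalg.cholesky(K)                      # O(n^3) in the training size
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    v = np.linalg.solve(L, K_s)
    return K_s.T @ alpha, np.diag(K_ss) - np.sum(v**2, axis=0)
```

For example, `gp_predict(X, y, X_new, noise_var=0.05, lengthscale=0.5, signal_var=2.0)` returns the predictive mean and variance at the new inputs.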
E. Deep Kernel Below are some notes on deep kernels. There is a lot of material, so I am leaving a placeholder here to fill in later. The outline is: Gaussian Process & Inducing Points; Deep Gaussian Process; Deep Kernel Learning; Deep Kernel for Density Estimation. This time there was simply too much to cover, so after going around in a big circle I never actually reached the topic in this Zhihu post's title; I will fill in the gap next time.
I recently wrote a blog post, Explanation of NNGP: Neural Network Gaussian Process, on how networks from a single infinitely wide layer up to deep infinitely wide networks behave like a Gaussian process, based on the ICLR paper DEEP NEURAL NETWORKS AS GAUSSIAN PROCESSES and two…
In the previous session on Gaussian processes, we introduced the Gaussian process model and the covariance function. In this session we are going to address two challenges of the Gaussian process. Firstly, we look at the computational tractability and secondly we look at extending the nature of t...
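One standard answer to the tractability challenge mentioned here is to approximate the full kernel matrix with a small set of inducing points. Below is a minimal sketch of the subset-of-regressors (Nystrom-style) predictive mean, assuming m inducing inputs Z chosen from the training set; the code is illustrative and not tied to the session's own material:

```python
import numpy as np

def sor_gp_mean(X, y, X_test, Z, kernel, noise_var=0.1):
    """Subset-of-regressors (Nystrom-style) GP predictive mean.
    The expensive solve is against the m x m inducing-point matrix,
    so the cost is O(n m^2) instead of the exact GP's O(n^3)."""
    Kzz = kernel(Z, Z) + 1e-6 * np.eye(len(Z))   # jitter for numerical stability
    Kzx = kernel(Z, X)
    Kzs = kernel(Z, X_test)
    # mean = Kzs^T (noise_var * Kzz + Kzx Kzx^T)^{-1} Kzx y
    A = noise_var * Kzz + Kzx @ Kzx.T
    return Kzs.T @ np.linalg.solve(A, Kzx @ y)
```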
Our method, Gaussian Process Spatial Alignment (GPSA), consists of a two-layer Gaussian process: the first layer maps observed samples’ spatial locations onto a CCS, and the second layer maps from the CCS to the observed readouts. Our approach enables complex downstream spatially aware analyses...
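To make the two-layer construction concrete, the following toy sketch draws a GP warp of 2-D spatial locations into a common coordinate system (CCS) and then draws a second GP mapping the warped coordinates to a readout. This is a hypothetical illustration of the layered structure only, not the GPSA implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X1, X2, ls=1.0):
    d = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-0.5 * d / ls**2)

# Observed 2-D spatial locations for one slice/sample.
X = rng.uniform(0, 1, size=(50, 2))

# Layer 1: GP warp of the observed locations into the CCS, with each
# coordinate perturbed around the identity map.
chol1 = np.linalg.cholesky(rbf(X, X, ls=0.5) + 1e-6 * np.eye(len(X)))
ccs = X + 0.05 * (chol1 @ rng.standard_normal((len(X), 2)))

# Layer 2: GP from the CCS to the observed readout (e.g., expression).
chol2 = np.linalg.cholesky(rbf(ccs, ccs, ls=0.3) + 1e-6 * np.eye(len(X)))
readout = chol2 @ rng.standard_normal(len(X))
```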
This code constructs the covariance kernel for a Gaussian process that is equivalent to an infinitely wide, fully connected, deep neural network. To use the code, run run_experiments.py, which uses the NNGP kernel to make a full Bayesian prediction on the MNIST dataset. ...
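The kernel such code constructs can be sketched with the exact layer-wise recursion for ReLU activations (the arc-cosine expectation). This is a simplified illustration with placeholder hyperparameter values; the repo's actual implementation differs in detail:

```python
import numpy as np

def nngp_relu_kernel(X, depth=3, w_var=1.6, b_var=0.1):
    """NNGP covariance of an infinitely wide, fully connected ReLU
    network, built layer by layer with the arc-cosine recursion."""
    # Base case: kernel of the affine input layer.
    K = b_var + w_var * (X @ X.T) / X.shape[1]
    for _ in range(depth):
        diag = np.sqrt(np.diag(K))
        norm = np.outer(diag, diag)
        cos_t = np.clip(K / norm, -1.0, 1.0)
        theta = np.arccos(cos_t)
        # E[relu(u) relu(v)] for (u, v) with covariance K, then the
        # next layer's affine transform with variances w_var, b_var.
        K = b_var + (w_var / (2 * np.pi)) * norm * (np.sin(theta) + (np.pi - theta) * cos_t)
    return K
```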
Recently, kernel functions for multi-layer random neural networks have been developed, but only outside of a Bayesian framework. As such, previous work has not identified the correspondence between using these kernels as the covariance function for a GP and performing fully Bayesian prediction with ...
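The correspondence in question is direct: take the multi-layer kernel as the GP covariance and apply the standard exact predictive equations, so that the fully Bayesian prediction is the usual GP posterior

$$ \mu_* = K_{*X}\,(K_{XX} + \sigma_n^2 I)^{-1} y, \qquad \Sigma_* = K_{**} - K_{*X}\,(K_{XX} + \sigma_n^2 I)^{-1} K_{X*}, $$

with every K block computed from the network-derived kernel rather than a hand-designed one.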
The effects of sample distribution density, training-set size, and the choice of kernel function are discussed. The results are compared to those of a Gaussian process and a deep neural network. Particular attention is paid to several deceptive predictions of surrogate models, although the ...
Rethinking Kernel Methods for Node Representation Learning on Graphs. Yu Tian, Long Zhao, Xi Peng, Dimitris N. Metaxas. NeurIPS 2019.
Break the Ceiling: Stronger Multi-scale Deep Graph Convolutional Networks. Sitao Luan, Mingde Zhao, Xiao-Wen Chang, Doina Precup. NeurIPS 2019.
N-Gram Graph: A Sim...
Unlike standard Gaussian processes, our model scales well with the size of the training set because it avoids kernel matrix inversion. Moreover, we present a mixture of DNN-GPs to further improve regression performance. In experiments on three representative large datasets, our ...
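One common way a model can avoid the n x n kernel matrix inversion entirely is to work with an explicit finite-dimensional feature map, so the linear-algebra cost depends on the number of features rather than the number of training points. Below is a sketch using random Fourier features as that map; this is an illustrative stand-in, not necessarily the mechanism of the model quoted above:

```python
import numpy as np

def rff_regression_mean(X, y, X_test, n_features=200, lengthscale=1.0,
                        noise_var=0.1, seed=0):
    """Regularized linear regression in a random Fourier feature space
    that approximates an RBF kernel: the solve is m x m (m = n_features),
    never n x n, so no kernel matrix is ever inverted."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_features)) / lengthscale
    b = rng.uniform(0, 2 * np.pi, n_features)
    phi = lambda Z: np.sqrt(2.0 / n_features) * np.cos(Z @ W + b)
    P, P_s = phi(X), phi(X_test)
    A = P.T @ P + noise_var * np.eye(n_features)   # m x m, independent of n
    return P_s @ np.linalg.solve(A, P.T @ y)
```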