A multivariate (multiple-input, multiple-output) Gaussian process regression (MGPR) modelling approach, capable of modelling multivariate nonlinear processes, is developed in this paper. The developed GPR model considers Gaussian colored noise rather than the traditional Gaussian white noise. The...
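As a rough illustration of the colored-noise idea, the sketch below replaces the usual σ²I noise term of standard GP regression with an exponentially correlated noise covariance. It is a minimal single-output, 1-D NumPy sketch; the kernels, hyperparameters, and helper names are illustrative assumptions, not the MGPR formulation from the paper.

```python
import numpy as np

def rbf_kernel(x1, x2, length=1.0, var=1.0):
    # Squared-exponential signal kernel (illustrative choice)
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length ** 2)

def colored_noise_cov(x, noise_var=0.1, corr_length=0.5):
    # Colored (correlated) noise: exponential covariance instead of sigma^2 * I
    d = np.abs(x[:, None] - x[None, :])
    return noise_var * np.exp(-d / corr_length)

def gp_predict(x_train, y_train, x_test):
    # Noise enters through a full covariance matrix rather than a diagonal
    K = rbf_kernel(x_train, x_train) + colored_noise_cov(x_train)
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(x_train)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v
    return mean, cov

# Toy usage
x = np.linspace(0, 5, 30)
y = np.sin(x) + 0.1 * np.random.randn(30)
xs = np.linspace(0, 5, 100)
mu, cov = gp_predict(x, y, xs)
```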
In this paper, we propose a unified framework which is used not only to introduce a novel multivariate Student-t process regression model (MV-TPR) for multi-output prediction, but also to reformulate the multivariate Gaussian process regression (MV-GPR) that overcomes some limitations of ...
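For orientation, a common way to write such a multivariate (multi-output) GP prior is via a matrix-variate Gaussian; the notation below (K for the n×n input covariance, Ω for the d×d output covariance) is an assumed restatement, not necessarily the paper's own symbols.

```latex
% Matrix-variate GP prior over the n x d latent outputs F (notation assumed)
F \sim \mathcal{MN}_{n\times d}\!\left(0,\, K,\, \Omega\right)
\quad\Longleftrightarrow\quad
\operatorname{vec}(F) \sim \mathcal{N}\!\left(0,\, \Omega \otimes K\right),
% with predictive mean at test inputs X_* (noise-free case)
\qquad
M_* = K_*^{\top} K^{-1} Y,
% row covariance K_{**} - K_*^{\top} K^{-1} K_* and column covariance \Omega.
```

The MV-TPR variant replaces the matrix-variate Gaussian with a matrix-variate Student-t distribution, which adds a degrees-of-freedom parameter governing tail heaviness.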
Keywords: Multivariate Gaussian process; Multivariate Student-t process; Gaussian process regression; Student-t process regression; Multi-output prediction; Stock investment strategy; Industrial sector; Time series prediction. DOI: 10.1007/s00521-020-04774-1. Year: 2020 ...
Uncertainty Quantification of Multivariate Gaussian Process Regression for Approximating Multivariate Computer Codes. TWMS Journal of Applied & Engineering Mathematics. Al-Taweel, Younus.
Keywords: generalized Gaussian process model; GP regression; GP classification; GGPM framework; exponential family distribution; inference algorithm approximation; generalized GP model; likelihood function; Taylor approximation. We propose a family of multivariate Gaussian process models for correlated outputs, based on assuming that the likelihood...
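The GGPM-style construction referenced by these keywords typically wraps an exponential-family observation model around a latent GP; the generic form below is a hedged restatement in standard GLM notation, not the paper's exact parameterisation.

```latex
% Generic exponential-family likelihood around a latent GP (standard GLM notation, assumed)
p\!\left(y \mid f\right)
  = \exp\!\left\{ \frac{y\,\theta(f) - b\!\left(\theta(f)\right)}{a(\phi)} + c(y,\phi) \right\},
\qquad
f \sim \mathcal{GP}\!\left(0,\, k(\cdot,\cdot)\right).
```

Gaussian, Bernoulli, and Poisson likelihoods are recovered by particular choices of θ, b, and a; for the non-Gaussian cases the posterior is intractable, which is where approximations such as a Taylor (Laplace-style) expansion of the log-likelihood come in.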
Joint segmentation of multivariate time series with hidden process regression for human activity recognition. Neurocomputing 120, 633–644; 10.1016/j.neucom.2013.04.003 (2013). Chamroukhi, F. Piecewise regression mixture for simultaneous functional data clustering and optimal ...
Classification and Regression Tree (CART) is a nonparametric method for classification and prediction problems that does not require the data to follow a normal distribution. Although the method has not been explored much on foraminiferal data, it is a potentially useful and straightforward...
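As a quick illustration of the method (not of the foraminiferal analysis itself), the sketch below fits a CART classifier with scikit-learn; the dataset and hyperparameters are placeholders.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris

# Stand-in dataset; the snippet's foraminiferal data are not available here
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# CART makes no normality assumption: splits are chosen by impurity (Gini here)
tree = DecisionTreeClassifier(criterion="gini", max_depth=4, random_state=0)
tree.fit(X_tr, y_tr)
print("held-out accuracy:", tree.score(X_te, y_te))
```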
I am trying to replicate the first experiment about function regression in the paper Conditional Neural Processes, using Keras. I am therefore using the negative log-likelihood of my observations as the loss function. My network outputs two values, the mean and the standard deviat...
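A minimal way to set this up in Keras, assuming the network concatenates the mean and a raw scale in its final layer, is sketched below; this shows only the heteroscedastic Gaussian NLL loss on toy 1-D data, not the full Conditional Neural Process encoder/decoder.

```python
import numpy as np
import tensorflow as tf

def gaussian_nll(y_true, y_pred):
    # y_pred concatenates [mean, raw_scale]; softplus keeps the scale positive
    mean, raw_scale = tf.split(y_pred, num_or_size_splits=2, axis=-1)
    std = tf.nn.softplus(raw_scale) + 1e-6
    return tf.reduce_mean(
        0.5 * tf.math.log(2.0 * np.pi * tf.square(std))
        + 0.5 * tf.square((y_true - mean) / std)
    )

# Small regression net that outputs a mean and a raw scale per target
inputs = tf.keras.Input(shape=(1,))
h = tf.keras.layers.Dense(64, activation="relu")(inputs)
h = tf.keras.layers.Dense(64, activation="relu")(h)
outputs = tf.keras.layers.Dense(2)(h)  # [mean, raw_scale]
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss=gaussian_nll)

# Toy data
x = np.random.uniform(-2, 2, size=(256, 1)).astype("float32")
y = np.sin(x) + 0.1 * np.random.randn(256, 1).astype("float32")
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
```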
For a brief discussion of global signal regression in this context, see Supplementary Fig. 6. Computing O-Information on the full-size 200-node FC matrix results in positive quantities for both data sets (HCP: Ω = 79.16 nats; MICA: Ω = 46.69 nats), indicating that ...
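For reference, O-Information can be computed in nats from a covariance (or functional-connectivity) matrix under a Gaussian assumption, using differential entropies and the decomposition Ω = (n − 2)·H(X) + Σ_j [H(X_j) − H(X_{−j})]. The sketch below uses random data standing in for the HCP/MICA matrices; the Gaussian estimator is an assumption, not a claim about how the cited study computed its values.

```python
import numpy as np

def gaussian_entropy(cov):
    # Differential entropy of a multivariate Gaussian, in nats
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (k * np.log(2.0 * np.pi * np.e) + logdet)

def o_information(cov):
    # Omega = (n - 2) H(X) + sum_j [ H(X_j) - H(X_{-j}) ]  (Gaussian assumption)
    n = cov.shape[0]
    omega = (n - 2) * gaussian_entropy(cov)
    for j in range(n):
        rest = [i for i in range(n) if i != j]
        omega += gaussian_entropy(cov[j, j]) - gaussian_entropy(cov[np.ix_(rest, rest)])
    return omega

# Usage with a correlation (FC-like) matrix built from random data
rng = np.random.default_rng(0)
samples = rng.standard_normal((500, 5))
fc = np.corrcoef(samples, rowvar=False)
print(o_information(fc))  # positive => redundancy-dominated, negative => synergy-dominated
```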
(6) is mathematically equivalent to the projection pursuit regression model. The net defined in Eq. (6), as well as the one illustrated in Fig. 20, has just one hidden layer with p and p = 2 nodes, respectively. These nets may be generalized to accommodate more than one hidden layer and...
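To make the equivalence concrete (Eq. (6) itself is not reproduced in this excerpt), a generic projection pursuit regression model and a one-hidden-layer network can be written side by side; the symbols below are an assumed restatement, not the source's notation.

```latex
% Projection pursuit regression with p ridge functions g_j (generic form, assumed)
f(\mathbf{x}) = \sum_{j=1}^{p} g_j\!\left(\mathbf{w}_j^{\top}\mathbf{x}\right),
\qquad
% One-hidden-layer network with p hidden nodes and fixed activation \sigma
\hat{f}(\mathbf{x}) = \sum_{j=1}^{p} \beta_j\, \sigma\!\left(\mathbf{w}_j^{\top}\mathbf{x} + b_j\right).
```

In this reading, the network is the projection pursuit model with each data-driven ridge function g_j restricted to a scaled, shifted copy of a fixed activation σ.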