We regularize the Gaussian mixture Bayesian (GMB) classifier with respect to two aspects: 1) the class-conditional probability density functions, and 2) its complexity as a classifier. For the former, we employ the Bayesian regularization method proposed by Ormoneit and Tresp, which is derived from...
Keywords: iterative methods; parameter estimation; pattern recognition; Gaussian components; Gaussian mixture model; closeness measure; cluster analysis; density estimation method; log likelihood; regularization method. Conference: International Joint Conference on Neural Networks.
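As a rough illustration of the density-regularization idea described above (not Ormoneit and Tresp's exact prior), the sketch below fits one Gaussian mixture per class with a regularized covariance and classifies by Bayes' rule. The `reg_covar` knob is scikit-learn's diagonal covariance regularizer, used here as a stand-in for the Bayesian prior term; the class and parameter choices are illustrative assumptions.

```python
# Sketch: class-conditional GMM densities with regularized covariances,
# combined by Bayes' rule. reg_covar approximates the stabilizing effect
# of a prior on the covariances; it is NOT Ormoneit & Tresp's method.
import numpy as np
from sklearn.mixture import GaussianMixture

class RegularizedGMBClassifier:
    def __init__(self, n_components=3, reg_covar=1e-2):
        self.n_components = n_components
        self.reg_covar = reg_covar

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.models_, self.priors_ = [], []
        for c in self.classes_:
            Xc = X[y == c]
            gmm = GaussianMixture(n_components=self.n_components,
                                  reg_covar=self.reg_covar,  # covariance regularization
                                  random_state=0).fit(Xc)
            self.models_.append(gmm)
            self.priors_.append(len(Xc) / len(X))
        return self

    def predict(self, X):
        # log p(x | c) + log P(c) per class; argmax implements Bayes' rule
        scores = np.stack([m.score_samples(X) + np.log(p)
                           for m, p in zip(self.models_, self.priors_)], axis=1)
        return self.classes_[np.argmax(scores, axis=1)]
```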
Y Li, X Xie - Applied Intelligence: The International Journal of Artificial Intelligence, Neural Networks, and Complex Problem-Solving Technologies. Cited by: 0. Published: 2023. Optimized sparse polynomial chaos expansion with entropy regularization. Sparse Polynomial Chaos Expansion (PCE) is widely used in various...
The widespread application of high-throughput sequencing technology has produced a large number of publicly available gene expression datasets. However, because gene expression datasets are characterized by small sample sizes and high dimensionality...
State-of-the-art research has introduced advanced methodologies such as non-uniform quantization and quantization-aware training, enabling the efficient deployment of neural networks while preserving performance (cf. Jacob et al. [8], Zhuang et al. [9], Hubara et al. [10]). Furthermore, ...
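To make the quantization-aware-training idea concrete, here is a minimal sketch of the simulated ("fake") uniform affine quantization step used in that setting, in the spirit of Jacob et al. [8]. The function name, bit-width, and the straight-through-estimator shortcut are illustrative assumptions, not any library's API.

```python
# Sketch: simulated ("fake") uniform affine quantization, the core op in
# quantization-aware training. Forward pass sees quantization error;
# the straight-through estimator passes gradients through the rounding.
import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    qmin, qmax = 0, 2 ** num_bits - 1
    # Affine (asymmetric) quantization parameters from the observed range.
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = qmin - torch.round(x.min() / scale)
    # Quantize to the integer grid, then dequantize back to float.
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    x_q = (q - zero_point) * scale
    # Straight-through estimator: identity gradient through rounding.
    return x + (x_q - x).detach()
```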
Because noisy data make data streams uncertain, approaches based on evolving fuzzy systems (EFS) have also been investigated. For example, density clustering [46], vector quantization [46], and rule-based methods [47,48] have been proposed to learn from evolving data streams.
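As a generic illustration of the vector-quantization style of stream learning (not the specific algorithm of [46]; all names and rates below are assumptions), each arriving sample simply pulls its nearest codebook vector toward itself:

```python
# Sketch: online vector quantization over a data stream.
# Each sample moves the nearest codebook vector a small step toward it,
# so the codebook tracks the evolving data distribution.
import numpy as np

def online_vq(stream, n_codes=8, lr=0.05, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    codebook = rng.normal(size=(n_codes, dim))
    for x in stream:                                  # one pass, sample by sample
        i = np.argmin(np.linalg.norm(codebook - x, axis=1))
        codebook[i] += lr * (x - codebook[i])         # move winner toward sample
    return codebook
```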
After introducing noise (uncertainty) parameters $\sigma_1$ and $\sigma_2$ for the auto-encoder and score-guided networks, the final loss function of the proposed model is: $L(X,\hat{X},s)=\frac{1}{2\sigma_1^2}L_{RE}(X,\hat{X})+\frac{1}{2\sigma_2^2}\,\frac{1}{n}\sum_{i=1}^{n}L_{SE}(X_i,\hat{X}_i,s)+\cdots$
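A minimal sketch of this uncertainty-weighted combination, assuming $L_{RE}$ is a reconstruction term and $L_{SE}$ a score-guided term already averaged over samples. Parameterizing the weights as learnable log-variances is an assumption made here for numerical stability, and the truncated tail of the original loss is omitted:

```python
# Sketch: combining a reconstruction loss and a score-guided loss with
# learned uncertainty weights sigma_1, sigma_2, stored as log-variances
# (an assumption; the omitted trailing terms of the loss are not modeled).
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.log_var1 = nn.Parameter(torch.zeros(()))  # log sigma_1^2
        self.log_var2 = nn.Parameter(torch.zeros(()))  # log sigma_2^2

    def forward(self, loss_re: torch.Tensor, loss_se: torch.Tensor) -> torch.Tensor:
        # 1/(2 sigma^2) * L  ==  0.5 * exp(-log_var) * L
        return (0.5 * torch.exp(-self.log_var1) * loss_re
                + 0.5 * torch.exp(-self.log_var2) * loss_se)
```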
Finally, several experiments are conducted on two real-world social networks to evaluate the NBPR method, and the results show that NBPR outperforms other related recommendation algorithms. This also demonstrates the effectiveness of our method with neighborhood...
Various concrete regression techniques can be adopted for this purpose, such as linear regression, support vector machines (SVMs) [40,41], and neural networks (NNs) [42]. Missing values (MVs) are then replaced with the conditional expectation given by the regression. Local least squares (LLS) regression is ...
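A minimal sketch of this regression-based imputation idea, using ordinary linear regression rather than the LLS variant the text mentions; the function name and the "regress each incomplete column on the fully observed columns" scheme are illustrative assumptions:

```python
# Sketch: regression-based missing-value imputation. For each column with
# missing entries, regress it on the fully observed columns and replace
# the MVs with the regression's conditional-expectation estimates.
import numpy as np
from sklearn.linear_model import LinearRegression

def impute_by_regression(X: np.ndarray) -> np.ndarray:
    X = X.copy()
    complete = ~np.isnan(X).any(axis=0)          # fully observed columns
    for j in np.where(~complete)[0]:             # each column with MVs
        miss = np.isnan(X[:, j])
        reg = LinearRegression().fit(X[~miss][:, complete], X[~miss, j])
        X[miss, j] = reg.predict(X[miss][:, complete])
    return X
```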