Semi-Supervised Learning
Supervised training of multi-layer perceptrons (MLPs) with only a few labeled examples is prone to overfitting. Pretraining an MLP with unlabeled samples of the input distribution may achieve better generalization. Usually, pretraining is done in a layer-wise, greedy fashion ...
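Greedy layer-wise pretraining of the kind this snippet describes can be sketched with stacked autoencoders: each layer is trained on unlabeled data to reconstruct the representation produced by the layers below it. This is a minimal numpy sketch under assumed illustrative layer sizes and hyperparameters, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(X, hidden, epochs=200, lr=0.1):
    """Train a one-hidden-layer autoencoder on X; return encoder weights."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, hidden))   # encoder weights
    b = np.zeros(hidden)
    V = rng.normal(0, 0.1, (hidden, n_in))   # decoder weights
    c = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W + b)               # encode
        err = (H @ V + c) - X                # linear reconstruction error
        # gradient descent on mean squared reconstruction error
        gV = H.T @ err / len(X)
        gc = err.mean(axis=0)
        dH = (err @ V.T) * H * (1 - H)       # backprop through sigmoid
        gW = X.T @ dH / len(X)
        gb = dH.mean(axis=0)
        W -= lr * gW; b -= lr * gb; V -= lr * gV; c -= lr * gc
    return W, b

# Unlabeled samples from the input distribution (synthetic stand-in).
X_unlab = rng.normal(size=(500, 20))

# Greedy layer-wise pretraining: each autoencoder sees the activations
# of the previously pretrained layers. The resulting weights would then
# initialize an MLP that is fine-tuned on the few labeled examples.
weights = []
H = X_unlab
for hidden in (16, 8):
    W, b = pretrain_layer(H, hidden)
    weights.append((W, b))
    H = sigmoid(H @ W + b)
```

After pretraining, `weights` would seed the hidden layers of a supervised MLP before fine-tuning on the labeled subset.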
The classification stage is composed of an MLP classifier followed by a Hidden Markov Model (HMM), providing a good trade-off solution between complexity... Pablo Cabanas Molero, N. R. Reyes, P. V. Candeas, ... - Multimedia Tools & Applications. Cited by: 15. Published: 2011.
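A common way to realize such an MLP-followed-by-HMM stage is to treat the MLP's per-frame class posteriors as emission scores and decode a smoothed label sequence with Viterbi. The sketch below is a generic illustration with made-up posteriors and a sticky transition matrix, not the paper's actual model.

```python
import numpy as np

def viterbi(post, trans, prior):
    """Most likely state path given per-frame class posteriors (used as
    emission scores), a transition matrix, and an initial prior."""
    T, K = post.shape
    logp = np.log(post + 1e-12)
    logA = np.log(trans + 1e-12)
    delta = np.log(prior + 1e-12) + logp[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA       # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logp[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):            # backtrack
        path[t - 1] = back[t, path[t]]
    return path

# Noisy "MLP" posteriors for 3 classes: frame 2 momentarily favours
# class 1, but sticky transitions smooth the spike away.
post = np.array([[0.8, 0.1, 0.1],
                 [0.7, 0.2, 0.1],
                 [0.3, 0.6, 0.1],
                 [0.8, 0.1, 0.1],
                 [0.7, 0.2, 0.1]])
trans = np.full((3, 3), 0.05)
np.fill_diagonal(trans, 0.9)
prior = np.ones(3) / 3
path = viterbi(post, trans, prior)           # smoothed label sequence
```

Frame-wise argmax would flip to class 1 at frame 2; the HMM stage keeps the whole segment at class 0.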
In the model validation, the performance of the Stacking model is compared with several traditional models, including the Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and Random Forest (RF), in multi-class classification experiments. The prediction results show that the Stacking model achieves...
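A stacking ensemble over exactly these base learners can be sketched with scikit-learn's `StackingClassifier`; the synthetic dataset and all hyperparameters below are illustrative assumptions, not the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic multi-class stand-in for the paper's dataset.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The same three traditional models the snippet compares against,
# here used as base learners of the stack.
base = [
    ("svm", SVC(probability=True, random_state=0)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                          random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
]

# A logistic-regression meta-learner combines the base predictions.
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

In a comparison like the snippet's, each base model would also be fit and scored on its own against `acc`.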
A commonly encountered problem in MLP (multi-layer perceptron) classification problems is related to the prior probabilities of the individual classes - if... S. Lawrence, I. Burns, A. Back, ... - Springer-Verlag. Cited by: 128. Published: 1998.
Two-phase Inflow Performance Relationship Prediction Using Two Artificial Intelligence Techniques: Multi-layer Perceptron Versus Genetic Programming
Keywords: artificial intelligence; genetic programming; inflow performance relationship; multilayer perceptron
A genetic programming model has been compared with multi-layer perceptron (MLP) and empirical...
The Comparison of Single-Layer and Two-Layer MLP Neural Networks with the LM Learning Method and ANFIS Network in Determining the Stability Factor of Earth Dams
doi:10.22111/JHE.2020.5309
H. R. Babaali, M. Heidari Chegeni, P. Beiranvand - University of Sistan and Baluchestan...
The model was compared with Single-Layer Long Short-Term Memory (SL-LSTM), Multilayer Perceptron (MLP), and Complete Ensemble Empirical Mode Decomposition with Adaptive Noise–Improved Firefly Algorithm Long Short-Term Memory. Based on the evaluation metrics Mean Square Error (MSE), Root Mean Square ...
On the UNSW-NB15 and CICIDS2017 datasets, the number of neural units in the output layer of the CNN and MLP models is 9 and 14, respectively - that is, the number of abnormal sample types. The other parameters are the same as those of the binary classification. The RF model also uses ...
It combines the J48, multilayer perceptron (MLP), and logistic classifiers using the mean method, and the average recognition rate of the model is 97%. Lee et al. [35] proposed a smart-device-based hybrid expert model for recognizing human activity, with a recognition accuracy of up...
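The "mean method" of combining classifiers described here averages the per-class probability outputs of the base models and predicts the class with the highest mean. A minimal numpy sketch, with toy probabilities standing in for actual J48, MLP, and logistic outputs:

```python
import numpy as np

def mean_combine(*probas):
    """Average several classifiers' class-probability matrices and take
    the argmax per sample (the 'mean method' of combination)."""
    return np.mean(probas, axis=0).argmax(axis=1)

# Toy probabilities for 2 samples x 3 classes; the variable names are
# illustrative stand-ins for the three base classifiers' outputs.
p_j48 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p_mlp = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
p_log = np.array([[0.7, 0.2, 0.1], [0.2, 0.2, 0.6]])
labels = mean_combine(p_j48, p_mlp, p_log)   # -> [0, 2]
```

Note the second sample: J48 alone votes class 1, but the averaged probabilities favour class 2.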