Aucejo [31] introduced an iteratively reweighted least-squares (IRLS) algorithm to solve the multi-parameter multiplicative ℓp regularization model of IFI in the frequency domain. Qiao [32] adopted an iteratively reweighted ℓ1-norm (IRL1) algorithm to tackle an additive ℓp regularization ...
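As a point of reference, the sketch below shows one common IRLS scheme for a generic additive ℓp-regularized least-squares problem min_x ||Ax - b||_2^2 + lam*||x||_p^p with p <= 2. It is only an illustration under these assumptions, not Aucejo's multiplicative multi-parameter formulation or Qiao's IRL1 solver, and the symbols A, b, lam, and p are placeholders.

```python
import numpy as np

def irls_lp(A, b, lam=1e-2, p=1.0, n_iter=50, eps=1e-8):
    """IRLS for min_x ||Ax - b||_2^2 + lam * ||x||_p^p (0 < p <= 2).

    Each iteration replaces the lp penalty by a weighted l2 penalty with
    weights w_i = (x_i^2 + eps)^(p/2 - 1) and solves the resulting
    linear system in closed form.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # ordinary LS initialization
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_iter):
        w = (x**2 + eps) ** (p / 2.0 - 1.0)         # reweighting of the penalty
        x = np.linalg.solve(AtA + lam * np.diag(w), Atb)
    return x

# Toy usage: recover a sparse vector from noisy overdetermined data.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
x_true = np.zeros(30); x_true[[3, 11, 27]] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.01 * rng.standard_normal(100)
print(np.round(irls_lp(A, b, lam=0.5, p=1.0), 2))
```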
Conventional vector autoregressive (VAR) modelling methods applied to high-dimensional neural time series data yield noisy solutions that are dense or contain a large number of spurious coefficients. This reduces the speed and accuracy of auxiliary comp...
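To make the contrast concrete, the sketch below fits a VAR(p) model equation by equation, once with plain least squares (dense) and once with an ℓ1 penalty that zeroes out small, potentially spurious coefficients. It is a generic illustration under assumed placeholder data, lag order, and penalty weight, using scikit-learn's Lasso as a stand-in sparse estimator rather than any specific method from the cited work.

```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_var(Y, p=2, alpha=0.0):
    """Fit a VAR(p) model Y_t = sum_k A_k Y_{t-k} + e_t, one equation at a time.

    alpha = 0 reproduces the dense least-squares fit; alpha > 0 adds an l1
    penalty that shrinks small coefficients exactly to zero.
    """
    T, d = Y.shape
    # Stack lagged regressors: row t holds [Y_{t-1}, ..., Y_{t-p}].
    X = np.hstack([Y[p - k - 1:T - k - 1] for k in range(p)])
    Yt = Y[p:]
    coefs = np.empty((d, d * p))
    for j in range(d):                               # one regression per channel
        if alpha == 0:
            coefs[j] = np.linalg.lstsq(X, Yt[:, j], rcond=None)[0]
        else:
            coefs[j] = Lasso(alpha=alpha, fit_intercept=False,
                             max_iter=10000).fit(X, Yt[:, j]).coef_
    return coefs.reshape(d, p, d)                    # coefs[:, k, :] is A_{k+1}

# Toy comparison on a small simulated series (placeholder for real neural data).
rng = np.random.default_rng(1)
Y = rng.standard_normal((300, 5))
dense = fit_var(Y, p=2, alpha=0.0)
sparse = fit_var(Y, p=2, alpha=0.05)
print("nonzero coefficients:", np.count_nonzero(dense), "vs", np.count_nonzero(sparse))
```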
The particle swarm optimization (PSO) algorithm and the recursive least-squares estimator (RLSE) are combined in a hybrid manner to update the free parameters of the model: PSO updates the antecedent parameters of the proposed predictor, while the RLSE adjusts the consequent parameters. Azad et al...
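The sketch below is a schematic version of that hybrid loop on a toy TSK-style fuzzy predictor: PSO searches the antecedent (membership-function) parameters, and for each candidate the consequent (linear) parameters are fitted by RLSE before the particle's fitness is evaluated. The model form, data, rule count, and PSO constants are all assumptions for illustration, not the authors' predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)                       # toy 1-D input (placeholder)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)
K = 4                                             # number of fuzzy rules (assumed)

def design_matrix(x, centers, widths):
    """Normalized Gaussian firing strengths times [1, x]: consequent regressors."""
    w = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * widths[None, :] ** 2))
    w /= w.sum(axis=1, keepdims=True) + 1e-12
    return np.hstack([w, w * x[:, None]])          # shape (N, 2K)

def rlse(Phi, y, lam=1.0, delta=1e3):
    """Recursive least-squares estimate of the consequent parameters."""
    n = Phi.shape[1]
    theta, P = np.zeros(n), delta * np.eye(n)
    for phi_t, y_t in zip(Phi, y):
        phi_t = phi_t[:, None]
        k = P @ phi_t / (lam + phi_t.T @ P @ phi_t)
        theta += (k * (y_t - phi_t.T @ theta)).ravel()
        P = (P - k @ phi_t.T @ P) / lam
    return theta

def fitness(antecedent):
    centers, widths = antecedent[:K], np.abs(antecedent[K:]) + 1e-3
    Phi = design_matrix(x, centers, widths)
    return np.mean((Phi @ rlse(Phi, y) - y) ** 2)

# Plain global-best PSO over the antecedent parameters (centers and widths).
n_particles, dim, iters = 20, 2 * K, 50
pos = rng.uniform(-3, 3, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best MSE:", pbest_f.min())
```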
A sparse signal with known support can be easily resolved using least-squares optimization [69,70]. (Geethu Joseph, ..., Sai Subramanyam Thoota, "Advancements in Bayesian Methods and Implementation", in Handbook of Statistics, 2022, §1.1 "Quick summary of existing methods for ...".)
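A minimal sketch of that observation, assuming a generic measurement matrix A, observations y, and a known support set: restrict A to the support columns, solve ordinary least squares on the submatrix, and embed the result back into a full-length vector. The dimensions and data below are hypothetical.

```python
import numpy as np

def ls_on_support(A, y, support):
    """Least-squares recovery of a sparse signal whose support is known."""
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # LS on the submatrix
    x = np.zeros(A.shape[1])
    x[support] = coef                                          # embed into full vector
    return x

# Toy usage with a hypothetical 20x50 measurement matrix and a 3-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
support = [4, 17, 33]
x_true = np.zeros(50); x_true[support] = [1.0, -0.5, 2.0]
y = A @ x_true
print(np.allclose(ls_on_support(A, y, support), x_true))
```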
The K-SVD algorithm is inspired by the k-means clustering algorithm, which is also an NP-hard problem. The aim of k-means clustering is to partition all the signals into K clusters, such that each training signal belongs to the cluster with the nearest mean. It employs an iterative appro...
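The sketch below shows the plain k-means iteration described above (alternating nearest-mean assignment and mean update), not K-SVD itself; the data and cluster count are placeholders.

```python
import numpy as np

def kmeans(X, K, n_iter=50, seed=0):
    """Plain k-means: alternate nearest-mean assignment and mean update.

    X holds one training signal per row; returns cluster means and labels.
    """
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), K, replace=False)]      # random initial means
    for _ in range(n_iter):
        # Assignment step: each signal joins the cluster with the nearest mean.
        d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each mean becomes the centroid of its assigned signals.
        for k in range(K):
            if np.any(labels == k):
                means[k] = X[labels == k].mean(axis=0)
    return means, labels

# Toy usage on random 2-D points (placeholder data).
X = np.random.default_rng(1).standard_normal((200, 2))
means, labels = kmeans(X, K=3)
print(means)
```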
This latter approach could not, however, be directly applied in the sPLS-DA algorithm because of its iterative nature. Figure 3 illustrates the stability frequencies for the first two dimensions of the sPLS-DA for the GCM and SNP data sets using bootstrap sampling (i.e., bootstrap samples of size n). The ...
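For intuition about how such stability frequencies are obtained, the sketch below counts how often each variable is selected across bootstrap resamples of size n. It uses scikit-learn's Lasso as a stand-in sparse selector purely for illustration; it does not implement sPLS-DA, and the data, penalty, and number of resamples are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_frequencies(X, y, alpha=0.1, n_boot=200, seed=0):
    """Fraction of bootstrap resamples (of size n) in which each variable
    receives a nonzero coefficient from a sparse fit."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)                 # bootstrap sample of size n
        coef = Lasso(alpha=alpha, max_iter=10000).fit(X[idx], y[idx]).coef_
        counts += coef != 0
    return counts / n_boot

# Toy usage: only the first three variables carry signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 20))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.standard_normal(80)
print(np.round(stability_frequencies(X, y), 2))
```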
Our experimental results show that the proposed SSIM-based sparse representation algorithm achieves better SSIM performance and better visual quality than the corresponding least-squares-based method.

1 Introduction

In many signal processing problems, the mean squared error (MSE) has been the preferred choice...
(training) set, respectively, which are better than those of BM-SCCA. These results indicate that the search space of BM-SCCA may be too large, such that the algorithm can converge to local optima without prior knowledge, whereas the regularizations of the restrictions on outcome-relevant ...
OLS: Orthogonal Least Square
NP: Non-deterministic Polynomial
MSE: Mean Squared Error
MEP: Maximum Entropy Principle
OMP: Orthogonal Matching Pursuit
LARS: Least Angle Regression
SP: Subspace Pursuit
BCS: Bayesian Compressive Sensing
DoE: Design of Experiment
LOO: Leave-One-Out
PDF: Probability Density Function