The least-mean-square method for estimation over sparse adaptive networks is based on the Reweighted Zero-Attracting Least Mean Square (RZA-LMS) algorithm, providing an estimate at each node of the adaptive network. The extra penalty term of the RZA-LMS algorithm is then integrated into the ...
The least mean squares (LMS) algorithm is one of the most popular recursive parameter estimation methods. In its standard form it does not take into account any special characteristics that the parameterized model may have. Assuming that such a model is sparse in some domain (for example, it has...
The recently proposed ZA-LMS algorithm achieves this by introducing a "zero attractor" term in the update equation that pulls the coefficients towards zero, thus accelerating convergence. However, for systems whose sparsity level varies over a wide range, from highly sparse to non-...
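The zero-attractor idea described above can be sketched in a few lines. The snippet below is a minimal illustration of the reweighted variant (RZA-LMS) for identifying a sparse FIR channel; the step size `mu`, attractor strength `rho`, and reweighting constant `eps` are illustrative values chosen for the example, not parameters from any of the cited papers.

```python
import numpy as np

def rza_lms(x, d, num_taps, mu=0.05, rho=5e-4, eps=10.0):
    """Reweighted zero-attracting LMS sketch.

    x        : input signal
    d        : desired (reference) signal
    num_taps : length of the adaptive filter
    """
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(d)):
        # Current input regressor: [x[n], x[n-1], ..., x[n-num_taps+1]]
        u = x[n - num_taps + 1 : n + 1][::-1]
        e = d[n] - w @ u  # a-priori estimation error
        # Standard LMS gradient step plus the reweighted zero attractor,
        # which shrinks small coefficients harder than large ones.
        w += mu * e * u - rho * np.sign(w) / (1.0 + eps * np.abs(w))
    return w
```

Setting `eps = 0` recovers the plain zero attractor of ZA-LMS, and dropping the attractor term entirely gives standard LMS, which is what makes this family convenient for systems of varying sparsity.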
An algorithm based on least-mean-square error and sparse features is presented for underdetermined blind source separation (BSS), i.e., the situation in which the number of observed signals is less than the number of sources. In this paper, exploiting the sparsity of the sources, we first estimate the mixing ma...
We further generalize the results to heteroscedastic normal mean models. Specifically, we propose a semiparametric estimator which can be calculated efficiently by combining the familiar EM algorithm with the Pool-Adjacent-Violators algorithm for isotonic regression. The effectiveness of our methods is ...
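The Pool-Adjacent-Violators step mentioned above is the classical O(n) routine for isotonic (monotone least-squares) regression. The following is a minimal sketch of it in isolation, assuming a non-decreasing fit with optional observation weights; it is not the semiparametric estimator itself, only the isotonic sub-step.

```python
def pava(y, w=None):
    """Pool-Adjacent-Violators: non-decreasing least-squares fit to y."""
    y = [float(v) for v in y]
    w = [1.0] * len(y) if w is None else [float(v) for v in w]
    # Each block stores [mean, weight, length]; adjacent blocks that
    # violate monotonicity are pooled into their weighted mean.
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, n1 + n2])
    fit = []
    for m, _, n in blocks:
        fit.extend([m] * n)
    return fit
```

For example, `pava([3, 1, 2, 4])` pools the first three violating points into their mean, 2, and leaves the final point untouched. In the EM context described above, a step like this would replace the unconstrained M-step mean update with its monotone projection.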
Conventional Vector Autoregressive (VAR) modelling methods applied to high dimensional neural time series data result in noisy solutions that are dense or have a large number of spurious coefficients. This reduces the speed and accuracy of auxiliary comp
Our simulations show that the proposed algorithm converges faster and reaches a lower final MSE than MAP-LMS, at the cost of higher complexity. Moreover, some lower bounds for sparse channel estimation are discussed. Specifically, a Cramér-Rao bound and a Bayesian Cramér-Rao bound are also ...
Moreover, the algorithm was applied to a scanning configuration that is not used in practice (namely, the interior problem). In this paper, we extend this method to a more practical protocol, sparse-view and limited-angle image reconstruction. Furthermore, the FBP algorithm is adopted instead of...
The K-SVD algorithm is inspired by the k-means clustering algorithm, which is also an NP-hard problem. The aim of k-means clustering is to partition all the signals into K clusters, in which each training signal belongs to the cluster with the nearest mean. It employs an iterative appro...
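The iterative procedure described above (Lloyd's algorithm) alternates between a nearest-mean assignment step and a mean-update step. A minimal sketch, assuming signals are stored as rows of a NumPy array (this is an illustration of k-means itself, not of the K-SVD implementation):

```python
import numpy as np

def kmeans(signals, K, iters=50, seed=0):
    """Minimal Lloyd's k-means: rows of `signals` are training signals."""
    rng = np.random.default_rng(seed)
    # Initialize the K means from randomly chosen training signals.
    means = signals[rng.choice(len(signals), K, replace=False)].copy()
    labels = np.zeros(len(signals), dtype=int)
    for _ in range(iters):
        # Assignment step: each signal joins the cluster with the nearest mean.
        dists = np.linalg.norm(signals[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each mean becomes the centroid of its assigned signals.
        for k in range(K):
            if np.any(labels == k):
                means[k] = signals[labels == k].mean(axis=0)
    return means, labels
```

K-SVD generalizes the update step: instead of replacing each "mean" by a centroid, each dictionary atom is refined via a rank-one SVD of the residual over the signals that use it, and signals may use several atoms at once.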
Algorithm 1. Sequential Threshold Bayesian Linear Regression

3.4. Error metric and computer implementation

For assessing the accuracy of our models, we consider two error metrics. First, the relative ℓ2 norm of the difference between the ground truth and the discovered vector of PDE coefficients:...
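The first error metric described above is a one-liner. In the sketch below, `xi_true` and `xi_hat` are illustrative names for the ground-truth and discovered coefficient vectors:

```python
import numpy as np

def relative_l2_error(xi_true, xi_hat):
    """Relative l2 norm of the difference between the ground-truth and
    discovered PDE coefficient vectors."""
    return np.linalg.norm(xi_hat - xi_true) / np.linalg.norm(xi_true)
```

Normalizing by the norm of the ground truth makes the metric scale-invariant, so coefficient vectors of very different magnitudes can be compared on one axis.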