Convergence of the Algorithm. The purpose of this chapter is to prove the convergence of the algorithm. It is shown that the evolution of the maximum distance among the trial points and the local trial points tends to zero. For... (Andrei, Neculai, doi:10.1007/978-3-030-68517-1_3)
Two convergence aspects of the EM algorithm are studied: (i) does the EM algorithm find a local maximum or a stationary value of the (incomplete-data) likelihood function? (ii) does the sequence of parameter estimates generated by EM converge? Several convergence results are obtained under condi...
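The central property behind these results is that each EM iteration never decreases the incomplete-data log-likelihood, so the likelihood values converge (to a stationary value under suitable conditions), even when the parameter iterates themselves need extra assumptions to converge. Below is a minimal illustrative sketch, assuming a two-component 1-D Gaussian mixture (not the setting of the cited paper), that tracks the log-likelihood over EM iterations and checks its monotonicity.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50, seed=0):
    """EM for a two-component 1-D Gaussian mixture; returns the log-likelihood trace."""
    rng = np.random.default_rng(seed)
    pi = 0.5
    mu = rng.choice(x, 2, replace=False)                   # crude initialization
    var = np.array([x.var(), x.var()])
    ll_trace = []
    for _ in range(n_iter):
        # E-step: component responsibilities for every observation
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        w = np.array([pi, 1 - pi]) * pdf
        ll_trace.append(np.log(w.sum(axis=1)).sum())       # incomplete-data log-likelihood
        r = w / w.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weight, means and variances
        nk = r.sum(axis=0)
        pi = nk[0] / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return ll_trace

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])
ll = em_gmm_1d(x)
assert all(b >= a - 1e-8 for a, b in zip(ll, ll[1:]))      # likelihood never decreases
```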
The EM algorithm is a popular iterative algorithm for finding maximum likelihood estimates from incomplete data. However, a drawback of the EM algorithm is that it converges slowly when the proportion of missing data is large. In order to speed up the convergence of the EM algorithm, we propose th...
As we said, there is no guarantee that the EM algorithm converges to a global maximum of the likelihood. If we suspect that the likelihood may have multiple local maxima, we should use the multiple-starts approach. In other words, we should run the EM algorithm several times with different starting points.
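A compact sketch of the multiple-starts idea follows, using scikit-learn's GaussianMixture as the EM routine purely for illustration (the data, number of restarts, and seeds are assumptions): EM is run from several random initializations and the fit with the highest log-likelihood is kept.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 1, (200, 1)), rng.normal(3, 1, (200, 1))])

# Run EM from several random initializations and keep the highest-likelihood fit.
best_model, best_ll = None, -np.inf
for seed in range(10):
    gm = GaussianMixture(n_components=2, init_params="random",
                         n_init=1, random_state=seed).fit(X)
    ll = gm.score(X)                      # average log-likelihood of the data
    if ll > best_ll:
        best_model, best_ll = gm, ll

print("best average log-likelihood:", best_ll)
print("estimated means:", best_model.means_.ravel())
```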
Approximations for the power levels at the output of an adaptive array that uses the diagonally loaded sample matrix inversion (SMI) algorithm are derived. Diagonal loading is a technique where the diagonal of the covariance matrix is augmented with a positive or negative constant prior to inversion...
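A minimal sketch of the loading step described above, not the paper's derivation: the sample covariance estimate has a constant added to its diagonal before inversion, here applied to standard SMI weights w ∝ R⁻¹s. The array size, snapshot count, steering vector, and loading level are illustrative assumptions.

```python
import numpy as np

def smi_weights(snapshots, steering, loading=0.0):
    """Diagonally loaded SMI beamformer: w proportional to (R_hat + loading*I)^(-1) s."""
    n_elem, n_snap = snapshots.shape
    R_hat = snapshots @ snapshots.conj().T / n_snap      # sample covariance matrix
    R_loaded = R_hat + loading * np.eye(n_elem)          # augment the diagonal before inversion
    w = np.linalg.solve(R_loaded, steering)
    return w / (steering.conj() @ w)                     # unit response in the look direction

# toy example: 8-element array, noise-only snapshots
rng = np.random.default_rng(1)
n_elem, n_snap = 8, 32
X = (rng.standard_normal((n_elem, n_snap))
     + 1j * rng.standard_normal((n_elem, n_snap))) / np.sqrt(2)
s = np.exp(1j * np.pi * np.arange(n_elem) * np.sin(0.3))   # steering vector for a look direction

w_plain = smi_weights(X, s)                  # unloaded SMI
w_loaded = smi_weights(X, s, loading=10.0)   # positive diagonal loading
```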
We prove novel convergence results for a stochastic proximal gradient algorithm suitable for solving a large class of convex optimization problems, where a convex objective function is given by the sum of a smooth and a possibly non-smooth component. We consider the convergence of the iterates and derive...
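As an illustration of the smooth-plus-non-smooth setting, here is a minimal sketch for the lasso instance f(x) = ½‖Ax − b‖² + λ‖x‖₁, where the smooth gradient is estimated from a random mini-batch of rows and the ℓ1 term is handled by its proximal (soft-thresholding) step. The step-size rule, batch size, and problem data are assumptions for illustration, not the algorithm analyzed in the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_grad(A, b, lam, n_iter=2000, batch=16, seed=0):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with a stochastic proximal gradient method."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the full smooth gradient
    x = np.zeros(n)
    for k in range(1, n_iter + 1):
        idx = rng.integers(0, m, size=batch)                   # random mini-batch of rows
        g = (m / batch) * A[idx].T @ (A[idx] @ x - b[idx])     # unbiased gradient estimate
        step = 1.0 / (L * np.sqrt(k))                          # diminishing step size
        x = soft_threshold(x - step * g, step * lam)           # forward (gradient) + backward (prox)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = stochastic_prox_grad(A, b, lam=0.1)
```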
A major result is that the best convergence bounds that we obtain for the expected values in the randomized algorithm are as good as the best for the deterministic, but more costly algorithms of Gauss-Southwell type. Numerical experiments illustrate the convergence of the method and the bounds ...
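To make the comparison concrete, the following sketch contrasts the two selection rules on a simple quadratic ½xᵀQx − bᵀx: the randomized rule picks a coordinate uniformly at random, while the Gauss-Southwell rule picks the coordinate with the largest gradient magnitude and therefore needs the whole gradient at every step. The problem instance is an assumption for illustration.

```python
import numpy as np

def coordinate_descent(Q, b, rule="random", n_iter=500, seed=0):
    """Minimize 0.5*x'Qx - b'x by single-coordinate updates."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(b))
    for _ in range(n_iter):
        g = Q @ x - b                       # full gradient (only g[i] is needed for the random rule)
        if rule == "random":
            i = rng.integers(len(b))        # cheap randomized selection
        else:
            i = np.argmax(np.abs(g))        # Gauss-Southwell: costlier, needs the whole gradient
        x[i] -= g[i] / Q[i, i]              # exact minimization along coordinate i
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
Q = M @ M.T + 30 * np.eye(30)               # positive definite quadratic
b = rng.standard_normal(30)
x_star = np.linalg.solve(Q, b)
for rule in ("random", "gauss-southwell"):
    x = coordinate_descent(Q, b, rule)
    print(rule, np.linalg.norm(x - x_star))
```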
In this paper, we propose two general forms of nonmonotone line searches and study the global convergence of the self-scaling BFGS algorithm with these two nonmonotone line search methods. We prove that, under a weaker condition than that in the literature, the algorithm is ...
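The snippet does not spell out the two proposed line-search forms. As a reference point only, here is a sketch of the classical Grippo-Lampariello-Lucidi nonmonotone Armijo condition, which accepts a step when the new objective value lies below the maximum of the last M function values rather than the current one; the toy objective and constants are assumptions.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_history, c=1e-4, beta=0.5, max_backtracks=30):
    """Backtracking step satisfying the GLL nonmonotone Armijo condition:
    f(x + a*d) <= max(recent f values) + c * a * g.d"""
    f_ref = max(f_history)                 # reference value over the last M iterates
    gd = g @ d                             # directional derivative (d is a descent direction)
    a = 1.0
    for _ in range(max_backtracks):
        if f(x + a * d) <= f_ref + c * a * gd:
            return a
        a *= beta
    return a

# toy use on a quadratic with steepest-descent directions
f = lambda x: 0.5 * x @ x
x = np.array([3.0, -4.0])
history = [f(x)]
for _ in range(20):
    g = x                                  # gradient of 0.5*||x||^2
    d = -g
    a = nonmonotone_armijo(f, x, d, g, history[-5:])   # keep at most M = 5 past values
    x = x + a * d
    history.append(f(x))
```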
The subgradient method is used frequently to optimize dual functions in Lagrangian relaxation for separable integer programming problems. In the method, al... X Zhao, PB Luh, J Wang, Journal of Optimization Theory & Applications, 1999 (cited 370 times). On the Surrogate Gradient Algorithm for...
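A minimal sketch of the setting described: for a separable problem, the dual function is evaluated by solving the relaxed subproblems independently, and the multiplier is updated along a subgradient given by the constraint violation. The tiny knapsack-style instance and the 1/k step-size rule are illustrative assumptions, not the cited paper's method.

```python
import numpy as np

def dual_subgradient(c, a, b, n_iter=200):
    """Minimize the Lagrangian dual q(lam) of  max c'x  s.t. a'x <= b, x in {0,1}^n.
    The relaxed problem separates: x_i = 1 iff c_i - lam*a_i > 0."""
    lam, best_q = 0.0, np.inf
    for k in range(1, n_iter + 1):
        x = (c - lam * a > 0).astype(float)                   # solve the separable subproblems
        q = lam * b + np.sum(np.maximum(c - lam * a, 0.0))    # dual value at lam
        best_q = min(best_q, q)
        subgrad = b - a @ x                                   # subgradient of q at lam
        lam = max(0.0, lam - (1.0 / k) * subgrad)             # projected subgradient step
    return lam, best_q

c = np.array([6.0, 5.0, 4.0, 3.0])     # item values
a = np.array([4.0, 3.0, 2.0, 1.0])     # item weights
b = 5.0                                # capacity
lam, bound = dual_subgradient(c, a, b)
print("multiplier:", lam, "dual bound on the optimum:", bound)
```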
The stochastic gradient (SG) algorithm has less of a computational burden than the least squares algorithms, but it cannot track time-varying parameters and has a poor convergence rate. In order to improve the tracking properties of the SG algorithm, the forgetting gradient (FG) algorithm is ...
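A minimal sketch of a stochastic-gradient identification recursion with an exponential forgetting factor in the gain, for a linear regression model y_t = φ_tᵀθ_t + noise with parameters that change halfway through the data. The specific recursion and constants are assumptions for illustration, not necessarily the paper's exact FG algorithm; setting the forgetting factor to 1 recovers the plain SG recursion.

```python
import numpy as np

def forgetting_gradient(phi, y, lam=0.95):
    """Forgetting gradient (FG) identification:
        r_t     = lam * r_{t-1} + ||phi_t||^2
        theta_t = theta_{t-1} + phi_t / r_t * (y_t - phi_t' theta_{t-1})
    lam = 1 gives the plain stochastic gradient (SG) recursion."""
    theta = np.zeros(phi.shape[1])
    r = 1.0
    estimates = []
    for t in range(len(y)):
        r = lam * r + phi[t] @ phi[t]
        theta = theta + phi[t] / r * (y[t] - phi[t] @ theta)
        estimates.append(theta.copy())
    return np.array(estimates)

# time-varying system: the true parameters change halfway through
rng = np.random.default_rng(0)
T = 400
phi = rng.standard_normal((T, 2))
theta_true = np.where(np.arange(T)[:, None] < T // 2, [1.0, -2.0], [3.0, 0.5])
y = np.sum(phi * theta_true, axis=1) + 0.05 * rng.standard_normal(T)

est = forgetting_gradient(phi, y, lam=0.95)
print("final estimate:", est[-1], "true final parameters:", theta_true[-1])
```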