A Schur--Parlett Algorithm for Computing Matrix Functions. The novel features here include a convergence test that avoids premature termination of the Taylor series evaluation and an algorithm for reordering and blocking the Schur form... P. I. Davies, N. J. Higham, SIAM Journal on Matrix Analysis and Applications. Cited by...
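As a minimal illustration of the Taylor-evaluation idea, the sketch below sums the exponential series for a small (typically triangular) Schur block and refuses to stop on a single small term, requiring two consecutive negligible terms instead. The actual Davies--Higham convergence test is more refined (it bounds the series tail via derivative estimates); the function name `taylor_expm_block` and the two-term heuristic are our assumptions.

```python
import numpy as np

def taylor_expm_block(T, tol=1e-14, max_terms=200):
    """Evaluate exp(T) for a small block by a truncated Taylor series.
    Requiring two consecutive small terms guards against the premature
    termination that a single-term test can suffer when one term
    happens to be tiny."""
    n = T.shape[0]
    S = np.eye(n, dtype=complex)   # partial sum, starts at I
    P = np.eye(n, dtype=complex)   # current term T^k / k!
    small_streak = 0
    for k in range(1, max_terms):
        P = P @ T / k
        S = S + P
        if np.linalg.norm(P, 1) <= tol * np.linalg.norm(S, 1):
            small_streak += 1
            if small_streak >= 2:  # two negligible terms in a row
                break
        else:
            small_streak = 0
    return S
```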
We study the gradient sampling algorithm of Burke, Lewis, and Overton for minimizing a locally Lipschitz function f on $\mathbb{R}^n$ that is continuously differentiable on an open dense subset. We strengthen the existing convergence results for this algorithm and introduce a slightly revised version for whi...
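A simplified sketch of one gradient sampling step may help: gradients are sampled at randomly perturbed points near the iterate, the minimum-norm element of their convex hull is computed (here via a small quadratic program over the simplex), and its negative is used as a descent direction. Details of the full method (ball sampling, radius reduction, differentiability checks) are omitted, and the function name `gradient_sampling_step` is ours.

```python
import numpy as np
from scipy.optimize import minimize

def gradient_sampling_step(f, grad, x, eps=1e-2, m=None, rng=None):
    """One gradient-sampling-style step: sample gradients near x, take
    the min-norm vector in their convex hull as -search direction."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    m = 2 * n if m is None else m
    # gradients at x and at m nearby sample points
    pts = [x] + [x + eps * rng.uniform(-1, 1, n) for _ in range(m)]
    G = np.array([grad(p) for p in pts])          # (m+1, n)
    # min-norm element of conv{g_i}: minimize ||G^T lam||^2 over simplex
    k = G.shape[0]
    res = minimize(lambda lam: np.sum((G.T @ lam) ** 2),
                   np.full(k, 1.0 / k),
                   bounds=[(0, 1)] * k,
                   constraints=[{"type": "eq",
                                 "fun": lambda lam: lam.sum() - 1}])
    g = G.T @ res.x
    if np.linalg.norm(g) < 1e-8:                  # near-stationary
        return x, True
    d = -g / np.linalg.norm(g)
    # simple backtracking (Armijo-type) line search on f
    t = 1.0
    while f(x + t * d) > f(x) - 1e-4 * t * np.linalg.norm(g) and t > 1e-12:
        t *= 0.5
    return x + t * d, False
```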
Let $\mathcal{S}_0, \mathcal{S}_1, \cdots, \mathcal{S}_k$ all be subspaces of a Hilbert space, and let $\mathcal{S} = \mathcal{S}_1 \oplus \cdots \oplus \mathcal{S}_k$. An algorithm is investigated for finding members of $\mathcal{S}_0$ and $\mathcal{S}$...
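Since the entry is truncated before the algorithm is stated, here is the classical building block such methods rest on: cyclic orthogonal projections onto subspaces, which (by the von Neumann--Halperin theorem) converge to the projection onto the intersection. This is an illustrative sketch, not necessarily the paper's exact scheme; `proj` and `alternating_projections` are our names.

```python
import numpy as np

def proj(A):
    """Orthogonal projector onto the column space of A."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

def alternating_projections(projectors, x0, iters=500):
    """Cyclically apply the projectors; for closed subspaces the
    iterates converge to the projection of x0 onto the intersection
    (von Neumann / Halperin)."""
    x = x0.copy()
    for _ in range(iters):
        for P in projectors:
            x = P @ x
    return x
```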
A dual algorithm based on the smooth function proposed by Polyak (1988) is constructed for solving nonlinear programming problems with inequality constraints. It generates a sequence of points converging locally to a Kuhn--Tucker point by computing unconstrained minimizers of a smooth potential function...
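A sketch of a Polyak-style modified-barrier dual method, under the assumption that the smooth potential takes the standard modified-barrier form: minimize the potential without constraints, then rescale the multipliers. Whether this matches the paper's exact potential is an assumption, and `modified_barrier_dual` is a hypothetical name.

```python
import numpy as np
from scipy.optimize import minimize

def modified_barrier_dual(f, gs, x0, lam0, k=10.0, outer=20):
    """Polyak-style modified-barrier dual sketch for
    min f(x)  s.t.  g_i(x) >= 0.
    Inner step: minimize the smooth potential
        M(x) = f(x) - (1/k) * sum_i lam_i * log(1 + k * g_i(x)).
    Outer step: multiplier update lam_i <- lam_i / (1 + k * g_i(x))."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(outer):
        def M(z):
            gz = np.array([g(z) for g in gs])
            if np.any(1.0 + k * gz <= 0):       # outside barrier domain
                return np.inf
            return f(z) - (lam / k) @ np.log1p(k * gz)
        x = minimize(M, x, method="Nelder-Mead").x   # unconstrained inner solve
        gx = np.array([g(x) for g in gs])
        lam = lam / (1.0 + k * gx)              # dual (multiplier) update
    return x, lam
```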
It is desirable that an algorithm for unconstrained optimization converges when the initial guess lies anywhere in a large region containing a minimum point. Furthermore, it is useful to have a measure of the rate of convergence that can easily be computed at every point along a trajectory...
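One common, cheaply computable surrogate for such a rate measure is the ratio of successive step lengths along the iterate trajectory; the sketch below computes it, purely as an illustration of "a measure computable at every point" rather than the paper's specific measure.

```python
import numpy as np

def convergence_rate_estimates(xs):
    """Given iterates x_0, x_1, ..., estimate the local linear rate at
    each step as ||x_{k+1} - x_k|| / ||x_k - x_{k-1}||; values well
    below 1 indicate fast contraction, values near 1 slow progress."""
    xs = [np.asarray(x, float) for x in xs]
    steps = [np.linalg.norm(b - a) for a, b in zip(xs, xs[1:])]
    return [s2 / s1 for s1, s2 in zip(steps, steps[1:]) if s1 > 0]
```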
Strong convergence of a new iterative method for a zero of an accretive operator and a nonexpansive mapping. Let E be a Banach space and A an m-accretive operator with a zero. Consider the iterative method that generates the sequence $\{x_n\}$ by the algorithm ... W. Meng, C. Hu, Fixed Point...
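The entry truncates before the scheme is given, so the sketch below shows only the generic Halpern-type iteration commonly used in this setting (here in $\mathbb{R}^d$ as a stand-in for the Banach space E), not the authors' actual method, which involves the resolvent of the m-accretive operator A.

```python
import numpy as np

def halpern_iteration(T, u, x0, n_iters=2000):
    """Halpern-type scheme x_{n+1} = a_n * u + (1 - a_n) * T(x_n) with
    a_n = 1/(n+2); for a nonexpansive T with fixed points the iterates
    converge strongly to a fixed point."""
    x = np.asarray(x0, float)
    for n in range(n_iters):
        a = 1.0 / (n + 2)
        x = a * u + (1 - a) * T(x)
    return x
```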
The annealing algorithm is a stochastic optimization method which has attracted attention because of its success with certain difficult problems, including...
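For concreteness, here is a minimal annealing loop: uphill moves are accepted with probability exp(-delta/T), letting the chain escape local minima while the temperature is slowly lowered. The cooling schedule, acceptance rule, and the name `simulated_annealing` are generic textbook choices, not details from this entry.

```python
import numpy as np

def simulated_annealing(cost, neighbor, x0, T0=1.0, cooling=0.995,
                        n_iters=10000, rng=None):
    """Basic annealing loop: accept uphill moves with probability
    exp(-delta/T) so the chain can escape local minima while the
    temperature T is slowly lowered."""
    rng = np.random.default_rng() if rng is None else rng
    x, fx, T = x0, cost(x0), T0
    best, fbest = x, fx
    for _ in range(n_iters):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < np.exp((fx - fy) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling        # geometric cooling schedule
    return best, fbest
```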
A variant of the generalized-α scheme is proposed for constrained mechanical systems represented by index-3 DAEs. Based on the analogy with linear multistep methods, an elegant convergence analysis is developed for this algorithm. Second-order convergence is demonstrated both for the generalized coordinates...
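As background, the sketch below implements plain generalized-α time stepping for a linear, *unconstrained* system M q'' + K q = 0 with the standard Chung--Hulbert parameter choices; the paper's index-3 constrained formulation with Lagrange multipliers is not reproduced here, and `generalized_alpha` is our name.

```python
import numpy as np

def generalized_alpha(M, K, q0, v0, h, n_steps, rho_inf=0.8):
    """Generalized-alpha stepping for M q'' + K q = 0 (linear,
    unconstrained).  Parameters follow the Chung--Hulbert choices."""
    am = (2 * rho_inf - 1) / (rho_inf + 1)
    af = rho_inf / (rho_inf + 1)
    gam = 0.5 - am + af
    beta = 0.25 * (1 - am + af) ** 2
    q, v = np.array(q0, float), np.array(v0, float)
    a = np.linalg.solve(M, -K @ q)              # consistent initial accel.
    A = (1 - am) * M + (1 - af) * beta * h**2 * K
    for _ in range(n_steps):
        # Newmark predictor for q, then solve the alpha-weighted balance
        # (1-am) M a_{n+1} + am M a_n + (1-af) K q_{n+1} + af K q_n = 0
        q_pred = q + h * v + h**2 * (0.5 - beta) * a
        rhs = -am * (M @ a) - (1 - af) * (K @ q_pred) - af * (K @ q)
        a_new = np.linalg.solve(A, rhs)
        q = q_pred + h**2 * beta * a_new
        v = v + h * ((1 - gam) * a + gam * a_new)
        a = a_new
    return q, v
```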
A key step of Lagrangian relaxation is to optimize the dual function, and the subgradient method is frequently used when the dual function is nondifferentiable... X. Zhao, P. B. Luh, IEEE Conference on Decision & Control. Cited by: 0. Published: 1997. Surrogate Gradient Algorithm for Lagrangian Relaxation...
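A sketch of the plain dual subgradient scheme the entry refers to: for each multiplier vector, the relaxed problem is solved, the constraint residual at its minimizer is a subgradient of the dual function, and the multipliers are moved along it with a diminishing step. The surrogate gradient variant (where the relaxed problem is only approximately optimized) is not shown; `solve_lagrangian` and `dual_subgradient` are hypothetical names.

```python
import numpy as np

def dual_subgradient(solve_lagrangian, g, lam0, n_iters=100, s0=1.0):
    """Subgradient ascent on the dual of min f(x) s.t. g(x) <= 0.
    solve_lagrangian(lam) returns a minimizer x* of f(x) + lam . g(x);
    g(x*) is a subgradient of the (generally nondifferentiable) dual,
    and lam is updated along it, projected onto lam >= 0."""
    lam = np.asarray(lam0, float)
    for k in range(n_iters):
        x = solve_lagrangian(lam)
        sub = np.asarray(g(x), float)       # subgradient of dual at lam
        step = s0 / (k + 1)                 # diminishing step size
        lam = np.maximum(0.0, lam + step * sub)
    return lam
```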
The stochastic gradient (SG) algorithm has a lower computational burden than the least squares algorithms, but it cannot track time-varying parameters and has a poor convergence rate. In order to improve the tracking properties of the SG algorithm, the forgetting gradient (FG) algorithm is ...
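A sketch of the FG recursion in its standard textbook form, assuming a linear regression model y_k = phi_k' theta; the forgetting factor lam < 1 discounts old data in the step-size normalizer so the estimate can track time-varying parameters, and lam = 1 recovers plain SG. The name `forgetting_gradient` is ours.

```python
import numpy as np

def forgetting_gradient(phis, ys, lam=0.98, theta0=None, r0=1.0):
    """Forgetting gradient (FG) identification sketch:
        r_k     = lam * r_{k-1} + ||phi_k||^2
        theta_k = theta_{k-1} + (phi_k / r_k) * (y_k - phi_k' theta_{k-1})
    """
    theta = (np.zeros(np.asarray(phis[0]).size) if theta0 is None
             else np.array(theta0, float))
    r = r0
    for phi, y in zip(phis, ys):
        phi = np.asarray(phi, float)
        r = lam * r + phi @ phi                  # discounted normalizer
        theta = theta + (phi / r) * (y - phi @ theta)
    return theta
```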