Convergence of the RMSProp deep learning method with penalty for nonconvex optimization. A norm version of the RMSProp algorithm with penalty (termed RMSPropW) is introduced into the deep learning framework, and it ...
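Since this entry only names the method, the following is a minimal sketch of a plain RMSProp update with an l2 penalty folded into the gradient, assuming a quadratic penalty (lam/2)*||w||^2; the paper's exact norm-version (RMSPropW) update may differ, and all hyperparameter names here are illustrative.

```python
import numpy as np

def rmsprop_penalty_step(w, grad, v, lr=1e-3, beta=0.9, eps=1e-8, lam=1e-4):
    """One RMSProp step on loss + (lam/2)*||w||^2 (generic sketch,
    not necessarily the paper's RMSPropW update)."""
    g = grad + lam * w                   # penalty adds lam * w to the gradient
    v = beta * v + (1.0 - beta) * g * g  # running second-moment estimate
    w = w - lr * g / (np.sqrt(v) + eps)  # coordinate-wise preconditioned step
    return w, v
```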
Because specific object intensity distributions must be reproduced, the inverse problem of hologram synthesis in CGH can also be cast as the minimization of a parameterized objective function with respect to its parameters. Since the choice of the objective function is often sto...
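As a concrete illustration of this casting, here is a minimal sketch that optimizes a phase-only hologram by gradient descent, assuming a 2-D FFT as the far-field propagation model and a normalized-intensity mean-squared error as the objective; the target pattern, resolution, optimizer, and step count are all placeholders rather than the paper's setup.

```python
import torch

target = torch.rand(64, 64)                      # stand-in target intensity
phase = torch.zeros(64, 64, requires_grad=True)  # hologram parameters

opt = torch.optim.Adam([phase], lr=0.05)
for _ in range(200):
    field = torch.exp(1j * phase)                # unit-amplitude phase mask
    recon = torch.fft.fft2(field)                # assumed propagation model
    intensity = recon.abs() ** 2
    loss = torch.mean((intensity / intensity.mean()
                       - target / target.mean()) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```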
In this paper, we formally establish the parameter and function value convergence of proximal BiO-AID in regularized nonconvex and nonsmooth bi-level optimization.

Applications of Bi-level Optimization. Bi-level optimization has been widely applied to meta-learning (Snell et al., 2017; Franceschi ...
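To make the implicit-differentiation idea behind this family of methods concrete, here is a toy sketch on a quadratic lower-level problem where the hypergradient is available in closed form; it is illustrative only and is not the paper's proximal BiO-AID (no proximal step, no nonsmooth regularizer), and the matrices and step sizes are arbitrary.

```python
import numpy as np

# Toy bilevel problem:
#   upper level:  min_x  f(x, y*(x)) = 0.5 * ||y*(x) - b||^2
#   lower level:  y*(x) = argmin_y  g(x, y) = 0.5 * ||y - A x||^2
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])

x = np.zeros(2)
for _ in range(100):
    # Inner loop: approximate y*(x) with a few gradient steps on g.
    y = np.zeros(2)
    for _ in range(50):
        y -= 0.2 * (y - A @ x)
    # Implicit differentiation: grad_y g = 0 gives y* = A x, so dy*/dx = A,
    # and the hypergradient is (dy*/dx)^T grad_y f = A^T (y* - b).
    x -= 0.1 * A.T @ (y - b)

print(x)  # approaches A^{-1} b = [0.5, -1.0]
```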
In Section 4, we integrate non-convex regularizations into a general model for SCI reconstruction and develop an optimization algorithm based on ADMM. We provide a convergence analysis of the algorithm in Section 5. Subsequently, we evaluate the performance of the proposed method in Section 6 and conclude this ...
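The following is a minimal ADMM sketch with a non-convex regularizer, using an l0 penalty whose proximal step is hard thresholding; the random sensing matrix, the l0 choice, and all constants are illustrative assumptions, not the paper's SCI measurement model or regularization.

```python
import numpy as np

# Sketch of ADMM for  min_x 0.5*||y - Phi x||^2 + lam*||x||_0
# via the splitting x = z.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 100))            # stand-in sensing operator
x_true = np.zeros(100); x_true[:5] = rng.standard_normal(5)
y = Phi @ x_true

lam, rho = 0.05, 1.0
x = np.zeros(100); z = np.zeros(100); u = np.zeros(100)
PtP = Phi.T @ Phi + rho * np.eye(100)
Pty = Phi.T @ y
for _ in range(100):
    x = np.linalg.solve(PtP, Pty + rho * (z - u))   # quadratic subproblem
    w = x + u
    z = np.where(w**2 > 2 * lam / rho, w, 0.0)      # prox of l0: hard threshold
    u = u + x - z                                   # dual update
```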
Accelerate Distributed Stochastic Descent for Nonconvex Optimization with Momentum. doi:10.1109/MLHPCAI4S51975.2020.00011. Keywords: Training, Program processors, Machine learning algorithms, Conferences, Stochastic processes, Switches, Acceleration. The momentum method has been used extensively in optimizers for deep learning. Recent studies...
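For reference, here is a minimal sketch of one synchronous step of distributed SGD with heavy-ball momentum, with the all-reduce simulated by an in-process average; the function name, hyperparameters, and synchronous setting are assumptions, not the paper's specific acceleration scheme.

```python
import numpy as np

def distributed_momentum_step(w, worker_grads, v, lr=0.01, mu=0.9):
    """One synchronous distributed SGD step with momentum (sketch).

    worker_grads: list of per-worker stochastic gradients; in a real
    system the average would come from an all-reduce.
    """
    g = np.mean(worker_grads, axis=0)  # simulated all-reduce average
    v = mu * v + g                     # heavy-ball momentum buffer
    w = w - lr * v
    return w, v
```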
Non-convex optimization; Deep learning; Stochastic optimization; Adaptive methods; Mini-batch algorithms. Aiming at a direct and simple improvement of vanilla SGD, this paper presents a fine-tuning of its step sizes in the mini-batch case. To do so, one estimates curvature based on a local quadratic ...
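One common way to realize such a curvature-based step size, sketched below under the assumption of a finite-difference estimate along the gradient direction, is a Newton-like step of 1/curvature; the paper's own local quadratic model may differ, and all names and constants here are placeholders.

```python
import numpy as np

def curvature_tuned_step(w, grad_fn, eps=1e-4, max_lr=1.0):
    """SGD step with a step size from a local quadratic model (sketch).

    Estimates the curvature d^T H d along the gradient direction with one
    extra mini-batch gradient evaluation; grad_fn(w) returns a gradient.
    """
    g = grad_fn(w)
    d = g / (np.linalg.norm(g) + 1e-12)
    h = (grad_fn(w + eps * d) - g) @ d / eps  # finite-difference curvature
    lr = min(max_lr, 1.0 / max(h, 1e-8))      # guard tiny/negative curvature
    return w - lr * g
```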
The non-convex tax-aware portfolio optimization problem is traditionally approximated as a convex problem, which compromises solution quality and converges to a local minimum of the original problem instead of the global minimum. In this paper, we propose a non-deterministic meta-heuristic algorithm called Non-linear...
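Since the proposed algorithm's name is truncated here, the sketch below instead uses a generic simulated-annealing loop on a hypothetical nonconvex portfolio objective with a tax-like penalty, only to illustrate how a non-deterministic meta-heuristic escapes local minima; it is not the paper's method, and the returns, covariance, and penalty are all made up.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = rng.standard_normal(10) * 0.05        # hypothetical expected returns
cov = np.eye(10) * 0.02                    # hypothetical covariance

def objective(w):
    tax = 0.1 * np.sum(np.abs(w) > 0.15)   # nonconvex tax-like penalty
    return -mu @ w + 0.5 * w @ cov @ w + tax

w = np.full(10, 0.1)                       # long-only, fully invested start
best, best_cost = w.copy(), objective(w)
T = 1.0
for _ in range(5000):
    cand = np.clip(w + rng.normal(0, 0.02, 10), 0, None)
    cand /= max(cand.sum(), 1e-12)         # keep weights on the simplex
    dc = objective(cand) - objective(w)
    if dc < 0 or rng.random() < np.exp(-dc / T):  # Metropolis acceptance
        w = cand
        if objective(w) < best_cost:
            best, best_cost = w.copy(), objective(w)
    T *= 0.999                             # cooling schedule
```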