Convergence of the RMSProp deep learning method with penalty for nonconvex optimization. A norm version of the RMSProp algorithm with penalty (termed RMSPropW) is introduced into the deep learning framework and its convergence is addressed both analytically and numerically. For rigour, we consider the ...
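For orientation, a minimal sketch of what an RMSProp step with an added penalty (weight-decay) term can look like is given below; it does not reproduce the exact RMSPropW update of the cited paper, and the function name, the use of the gradient norm for the second-moment estimate, and the hyperparameter names (lr, beta, lam, eps) are illustrative assumptions.

    import numpy as np

    def rmsprop_penalty_step(w, grad, v, lr=1e-3, beta=0.9, lam=1e-4, eps=1e-8):
        # Illustrative sketch only: a norm-based RMSProp step with a penalty term.
        g = grad + lam * w                                    # penalised gradient: loss gradient plus lambda * w
        v = beta * v + (1.0 - beta) * np.linalg.norm(g) ** 2  # running estimate of the squared gradient norm
        w = w - lr * g / (np.sqrt(v) + eps)                   # step scaled by the norm estimate
        return w, v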
Because specific object intensity distributions are prescribed, the inverse problem of hologram synthesis in CGH can also be cast as the minimization of a parameterized objective function with respect to its parameters. Since the choice of the objective function is often sto...
In Section 4, we integrate non-convex regularizations into a general model for SCI reconstruction and develop an optimization algorithm based on ADMM. We provide a convergence analysis of the algorithm in Section 5. Subsequently, we evaluate the performance of the proposed method in Section 6 and conclude this ...
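As a point of reference, the generic scaled-form ADMM iterations for a split objective $\min_{x,z} f(x) + g(z)$ subject to $x = z$ are sketched below; the actual splitting and update order used for SCI reconstruction in the cited work may differ.

    x^{k+1} = \arg\min_{x} \; f(x) + \tfrac{\rho}{2}\,\|x - z^{k} + u^{k}\|_2^2,
    z^{k+1} = \arg\min_{z} \; g(z) + \tfrac{\rho}{2}\,\|x^{k+1} - z + u^{k}\|_2^2,
    u^{k+1} = u^{k} + x^{k+1} - z^{k+1}.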
Bi-level optimization has become an important and popular optimization framework that covers a variety of emerging machine learning applications, e.g., meta-learning (Franceschi et al., 2018; Bertinetto et al., 2018; Rajeswaran et al., 2019; Ji et al., 2020), hyperparameter optimization (Frances...
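These applications can all be written in the standard bi-level form sketched below, where $F$ is the outer objective (e.g., a validation or meta loss) and $G$ the inner objective (e.g., a training loss); the notation is generic rather than taken from any one of the cited papers.

    \min_{x} \; F\bigl(x,\, y^{*}(x)\bigr) \quad \text{s.t.} \quad y^{*}(x) \in \arg\min_{y} \; G(x, y).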
Keywords: Non-convex optimization; Deep learning; Stochastic optimization; Adaptive methods; Mini-batch algorithms. Aiming at a direct and simple improvement of vanilla SGD, this paper presents a fine-tuning of its step-sizes in the mini-batch case. To do so, one estimates curvature, based on a local quadratic ...
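A minimal sketch of such a curvature-based step-size is given below, assuming the curvature term $g^{\top} H g$ is approximated with a finite difference of mini-batch gradients; grad_fn, the probe radius r, and the cap alpha_max are hypothetical names and parameters, not the cited paper's.

    import numpy as np

    def curvature_step_size(grad_fn, w, g, r=1e-4, alpha_max=1.0):
        # Fit a local quadratic model along the descent direction -g and take
        # the step-size that minimises it (illustrative sketch only).
        g_probe = grad_fn(w - r * g)            # mini-batch gradient at a probe point along -g
        curvature = g.dot(g - g_probe) / r      # finite-difference estimate of g^T H g
        if curvature <= 0.0:                    # non-convex region: fall back to a fixed cap
            return alpha_max
        return min(g.dot(g) / curvature, alpha_max)  # minimiser of the one-dimensional quadratic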
for hologram optimization are mainly conducted by parallel computation based on the Fourier transform [73]. Among them, using a single fast Fourier transform (FFT) to compute Fraunhofer diffraction at infinity is one of the most widely used propagation strategies [74], which is simple enough to ...
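A minimal sketch of this single-FFT far-field propagation, using numpy's FFT with the usual centring shifts and omitting physical scaling constants and sampling-grid bookkeeping, is as follows.

    import numpy as np

    def fraunhofer_propagate(field):
        # Far-field (Fraunhofer) pattern via a single centred 2-D FFT;
        # scaling constants and coordinate bookkeeping are omitted.
        return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))

    # Usage sketch: reconstruction intensity from a phase-only hologram.
    hologram_phase = np.random.rand(512, 512) * 2 * np.pi          # illustrative placeholder phase
    recon_intensity = np.abs(fraunhofer_propagate(np.exp(1j * hologram_phase))) ** 2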
Stochastic gradient descent is the method of choice for solving large-scale optimization problems in machine learning. However, the question of how to effectively select the step-sizes in stochastic gradient descent methods is challenging, and can greatly influence the performance of stochastic gradient...
For the optimization of transmission maps, which can be performed by the method of guided filtering [25,26], a variational approach is a more suitable choice, because it is not only sensitive to the structure and texture of the image but can also reconstruct the image. Hou et al. designed nonlocal ...
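A typical variational objective for refining a transmission map has the generic form sketched below, where $t$ is the refined map, $\tilde{t}$ an initial estimate, and $\lambda$ a regularization weight; this is a common template, not the specific model of the cited works.

    \min_{t} \; \tfrac{1}{2}\,\|t - \tilde{t}\|_2^2 + \lambda\,\|\nabla t\|_1.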