In this paper we investigate the convergence of a recently popular class of first-order primal-dual algorithms for saddle point problems in the presence of errors in the proximal maps and gradients.
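To make the setting concrete, here is a minimal sketch of a Chambolle-Pock-style primal-dual iteration in which both proximal maps are evaluated inexactly. The quadratic instance, step sizes, and summable error model below are illustrative assumptions, not the paper's algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 4
K = rng.standard_normal((m, n))
b = rng.standard_normal(n)

# Saddle point:  min_x max_y  <Kx, y> + 0.5||x - b||^2 - 0.5||y||^2,
# i.e. the primal problem  min_x 0.5||Kx||^2 + 0.5||x - b||^2.

def prox_g(v, tau, err):
    # prox of g(x) = 0.5||x - b||^2, perturbed to model an inexact evaluation
    return (v + tau * b) / (1.0 + tau) + err * rng.standard_normal(v.shape)

def prox_fstar(v, sigma, err):
    # prox of f*(y) = 0.5||y||^2, perturbed likewise
    return v / (1.0 + sigma) + err * rng.standard_normal(v.shape)

Lk = np.linalg.norm(K, 2)
tau = sigma = 0.9 / Lk            # tau * sigma * ||K||^2 < 1
x = np.zeros(n); y = np.zeros(m); x_bar = x.copy()
for k in range(2000):
    e = 1e-6 / (k + 1) ** 2       # summable error sequence
    y = prox_fstar(y + sigma * (K @ x_bar), sigma, e)
    x_new = prox_g(x - tau * (K.T @ y), tau, e)
    x_bar = 2.0 * x_new - x       # extrapolation step
    x = x_new

x_star = np.linalg.solve(np.eye(n) + K.T @ K, b)
print(np.linalg.norm(x - x_star))
```

With a summable error sequence the iterates still approach the exact solution; with persistent errors one would instead expect convergence to a neighborhood.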
It is well known that primal first-order algorithms achieve sublinear (respectively, linear) convergence for smooth convex (respectively, smooth strongly convex) constrained minimization. However, these methods encounter numerical difficulties when the primal feasible set is complicated, since they require an exact projection onto this...
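The projection step is the crux: a minimal projected-gradient sketch (a standard method, with an instance chosen here for illustration) where the projection is cheap only because the feasible set is a Euclidean ball. For a set described by general convex functions, the `project_ball` call would itself require an inner solver, which is exactly the difficulty noted above.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def project_ball(x, r=1.0):
    # Euclidean projection onto {x : ||x||_2 <= r}: cheap for a ball,
    # but for a feasible set cut out by general convex functions this
    # step becomes an optimization problem of its own.
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b)      # gradient of 0.5||Ax - b||^2
    x = project_ball(x - grad / L)

print(0.5 * np.linalg.norm(A @ x - b) ** 2)
```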
Finally, the significant advantages of the accelerated SA over the existing algorithms are illustrated in the context of solving a class of stochastic programming problems whose feasible region is a simple compact convex set intersected with an affine manifold.

In the third part of this work, we ...
We derive relations between the inner and outer accuracies of the primal and dual problems, and we give a full convergence rate analysis for both gradient and fast gradient algorithms. We provide estimates of the primal and dual suboptimality and of the primal feasibility violation of the generated ...
We present two approximate versions of the proximal subgradient method for minimizing the sum of two convex functions (not necessarily differentiable). At each iteration, the algorithms require inexact evaluations of the proximal operator as well as approximate subgradients of the functions (namely:...
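A minimal sketch of one such inexact proximal subgradient iteration, on an illustrative instance of my own choosing (f(x) = ||Ax - b||_1 queried through an approximate subgradient oracle, g(x) = lam*||x||_1 with a perturbed prox); the stopping rules and error conditions of the actual algorithms are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.1

def F(x):
    # objective: f(x) + g(x) with f(x) = ||Ax - b||_1, g(x) = lam*||x||_1
    return np.abs(A @ x - b).sum() + lam * np.abs(x).sum()

def approx_subgrad_f(x, eps):
    # a subgradient of f, perturbed to model approximate oracle information
    return A.T @ np.sign(A @ x - b) + eps * rng.standard_normal(n)

def inexact_prox_g(v, t, eps):
    # prox of g (soft thresholding), again evaluated only approximately
    p = np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)
    return p + eps * rng.standard_normal(n)

x = np.zeros(n)
best = F(x)
for k in range(1, 2001):
    t = 1.0 / (np.linalg.norm(A, 2) * np.sqrt(k))   # diminishing steps
    x = inexact_prox_g(x - t * approx_subgrad_f(x, 1e-6 / k), t, 1e-6 / k)
    best = min(best, F(x))

print(best)
```

As with exact subgradient methods, the iterates are not monotone, so one tracks the best objective value seen so far.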
Keywords: first-order primal-dual algorithm; inexact; nonergodic convergence; linear convergence.
In this paper, we study a first-order inexact primal-dual algorithm (I-PDA) for solving a class of convex-concave saddle point problems. The I-PDA, which involves a relative error criterion and generalizes the classical...
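To illustrate what a relative error criterion looks like, here is a generic sketch (the I-PDA's precise criterion may differ): an approximate prox is computed by an iterative inner solver and accepted once its residual is bounded by a fixed fraction of the step length, rather than by an absolute tolerance.

```python
import numpy as np

def inexact_prox(v, t, c=0.5, max_iter=1000):
    # Approximate prox of g(p) = 0.5||p||^2, computed by gradient descent
    # on the prox subproblem phi(p) = g(p) + ||p - v||^2 / (2t) and stopped
    # by a relative error test:  ||grad phi(p)|| <= (c / t) * ||p - v||.
    grad_phi = lambda p: p + (p - v) / t
    L_phi = 1.0 + 1.0 / t              # Lipschitz constant of grad phi
    step = 0.4 / L_phi                 # deliberately short step, so the
    p = v.copy()                       # inner solve stays genuinely inexact
    for _ in range(max_iter):
        g = grad_phi(p)
        if np.linalg.norm(g) <= (c / t) * np.linalg.norm(p - v):
            break
        p = p - step * g
    return p

v = np.array([1.0, -2.0, 3.0])
t = 0.5
print(inexact_prox(v, t), v / (1.0 + t))   # approximate vs. exact prox
```

The appeal of such a rule is that the inner accuracy adapts to the progress of the outer iteration: early on, crude prox evaluations suffice.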
to project onto the primal constraint set described by a collection of general convex functions, we use Lagrangian relaxation to handle the complicated constraints and then apply dual (fast) gradient algorithms based on inexact dual gradient information to solve the corresponding dual problem...
Our theoretical results consist of new optimization algorithms, accompanied by global convergence guarantees, for solving a wide class of composite convex optimization problems. When the first objective term is additionally self-concordant, we establish different local convergence results for our method. In ...