Keywords: primal-dual subgradient method; time-staged capacity expansion planning; nondifferentiable convex programs; subgradient optimization; dynamic power generation capacity planning. Classification: C1180 Optimisation techniques; C1290 Applications of systems theory. We model capacity expansion problems as nondifferentiable convex programs. A dual ...
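A minimal sketch of the basic ingredient mentioned above, a projected subgradient step on a nondifferentiable convex cost. The piecewise-linear cost \(f(x)=\max_i(a_i^\top x+b_i)\), the random data, the diminishing step rule, and the nonnegativity constraint are illustrative assumptions, not the paper's capacity-expansion model.

```python
# Projected subgradient descent on f(x) = max_i (a_i @ x + b_i) over the box x >= 0.
# All problem data below are synthetic; this is an illustration, not the cited model.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))      # rows a_i of the piecewise-linear cost
b = rng.normal(size=5)           # offsets b_i

def f_and_subgrad(x):
    vals = A @ x + b
    i = int(np.argmax(vals))     # active piece; its gradient is a valid subgradient
    return vals[i], A[i]

x = np.zeros(3)
best = np.inf
for k in range(1, 201):
    val, g = f_and_subgrad(x)
    best = min(best, val)
    x = np.maximum(x - (1.0 / np.sqrt(k)) * g, 0.0)   # diminishing step, then project onto x >= 0

print("best objective value found:", best)
```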
Primal-dual subgradient method for huge-scale linear conic problems. In this paper, we develop a primal-dual subgradient method for solving huge-scale linear conic optimization problems. Our main assumption is that the prima... Nesterov, Yu., Shpirko, ... - SIAM Journal on Optimization. Cited by ...
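For orientation only, a generic primal-dual subgradient iteration on the Lagrangian \(L(x,y)=c^\top x + y^\top(Ax-b)\) with \(x\) confined to a box is sketched below. It is not the huge-scale conic scheme of the paper above (which exploits sparse updates); the data, box, and step sizes are made up.

```python
# Generic primal-dual subgradient step on L(x, y) = c @ x + y @ (A @ x - b), x in [0, 1]^n.
# This is NOT the huge-scale conic method cited above; all data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 6
A = rng.normal(size=(m, n))
b = A @ rng.uniform(0.0, 1.0, size=n)   # guarantees a feasible point inside the box
c = rng.uniform(0.5, 1.5, size=n)       # linear cost

x = np.full(n, 0.5)
y = np.zeros(m)
for k in range(1, 2001):
    step = 1.0 / np.sqrt(k)
    x = np.clip(x - step * (c + A.T @ y), 0.0, 1.0)   # primal descent + projection onto the box
    y = y + step * (A @ x - b)                        # dual ascent on the equality residual

print("constraint residual norm:", np.linalg.norm(A @ x - b))
```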
Regularized Primal-Dual Subgradient Method for Distributed Constrained Optimization. In this paper, we study the distributed constrained optimization problem where the objective function is the sum of local convex cost functions of distribu... D. Yuan, D. W. C. Ho, S. Xu - IEEE Transactions on Cybernetics.
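The distributed setting can be pictured with the classic consensus-plus-subgradient template: each agent averages its neighbours' estimates and then steps along a subgradient of its own local cost. The sketch below uses made-up local costs \(|x-\theta_i|\) on a three-agent network and is not the regularized primal-dual scheme of the paper above.

```python
# Consensus-based distributed subgradient sketch: 3 agents, local costs f_i(x) = |x - theta_i|.
# The mixing matrix, costs, and step rule are illustrative; the global optimum of
# sum_i |x - theta_i| is the median of the theta_i (here 2.0).
import numpy as np

theta = np.array([1.0, 2.0, 4.0])             # each agent's private target
W = np.array([[0.50, 0.25, 0.25],             # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x = np.zeros(3)                               # one scalar estimate per agent
for k in range(1, 501):
    x = W @ x                                 # consensus averaging with neighbours
    x -= (1.0 / k) * np.sign(x - theta)       # local subgradient step on |x_i - theta_i|
print("agent estimates:", x)                  # all three should end up near the median 2.0
```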
Convergence rate of incremental subgradient algorithms, pp. 223–264. Springer, Boston (2001). Nemirovski, A. S.: Prox-method with rate of convergence \(O(1/t)\) for variational inequalities with Lipschitz continuous monotone operators and smooth convex–concave saddle...
We formulate the primal-dual hybrid gradient method (also referred to as the Riemannian Chambolle–Pock algorithm, RCPA) for general optimization problems on manifolds involving nonlinear operators. We present an exact and a linearized formulation of this novel method and prove, under suitable assumptions...
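As background, a compact Euclidean Chambolle–Pock (PDHG) iteration for \(\min_x \|Kx-b\|_1 + \tfrac{\lambda}{2}\|x\|^2\) is sketched below; the Riemannian/nonlinear-operator variant discussed above additionally needs retractions and linearizations that are omitted here. The problem data and step sizes are illustrative assumptions.

```python
# Euclidean Chambolle-Pock (PDHG) sketch for min_x ||K x - b||_1 + (lam/2) * ||x||^2.
# Synthetic data; step sizes satisfy sigma * tau * ||K||^2 < 1.
import numpy as np

rng = np.random.default_rng(2)
m, n, lam = 20, 10, 0.1
K = rng.normal(size=(m, n))
b = K @ rng.normal(size=n)

L = np.linalg.norm(K, 2)                 # operator norm of K
sigma = tau = 0.9 / L
x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)

for _ in range(500):
    # dual prox of F(z) = ||z - b||_1: project (y + sigma*(K @ x_bar - b)) onto the unit box
    y = np.clip(y + sigma * (K @ x_bar - b), -1.0, 1.0)
    x_new = (x - tau * (K.T @ y)) / (1.0 + tau * lam)   # primal prox of (lam/2)||x||^2
    x_bar = 2.0 * x_new - x                             # extrapolation with theta = 1
    x = x_new

print("residual ||K x - b||_1:", np.abs(K @ x - b).sum())
```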
We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods, and use it to derive novel variants of mini-batched SDCA. Our guarantees for both...
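For concreteness, the "spectral norm of the data" is the largest singular value of the (suitably scaled) data matrix; the snippet below computes it alongside a plain mini-batched subgradient step for the L2-regularized hinge loss. The batch size, regularization, scaling convention, and synthetic data are assumptions, and none of the paper's speedup guarantees are reproduced.

```python
# Mini-batched subgradient (Pegasos-style) step for the L2-regularized hinge loss,
# alongside the spectral norm of the data matrix; synthetic data, illustration only.
import numpy as np

rng = np.random.default_rng(3)
n, d, lam, batch = 200, 20, 0.01, 16
X = rng.normal(size=(n, d)) / np.sqrt(d)
y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))

print("spectral norm of X / sqrt(n):", np.linalg.norm(X, 2) / np.sqrt(n))

w = np.zeros(d)
for t in range(1, 301):
    idx = rng.choice(n, size=batch, replace=False)
    active = y[idx] * (X[idx] @ w) < 1.0                  # examples with a nonzero hinge subgradient
    Xa, ya = X[idx][active], y[idx][active]
    g = lam * w - (ya[:, None] * Xa).sum(axis=0) / batch  # mini-batch subgradient of the objective
    w -= g / (lam * t)                                    # 1/(lam*t) step size

obj = lam / 2 * w @ w + np.maximum(0.0, 1.0 - y * (X @ w)).mean()
print("training objective:", obj)
```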
Decomposition techniques implement the so-called "divide and conquer" strategy in convex optimization problems, with primal and dual decomposition being the two classical approaches. Although both solutions achieve the goal of splitting the original program into se...
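A toy dual-decomposition sketch of the splitting idea: two quadratic subproblems coupled only by a shared budget are separated by dualizing the coupling constraint, each subproblem is solved independently given a price, and a master updates the price by a dual subgradient step. The cost coefficients and budget are made-up illustration data.

```python
# Dual decomposition of: min 0.5*a1*(x1-c1)^2 + 0.5*a2*(x2-c2)^2  s.t.  x1 + x2 = B,  x >= 0.
# Each subproblem minimizes its own cost plus price*x_i; the master runs dual subgradient ascent.
import numpy as np

a = np.array([1.0, 2.0])      # curvature of each local cost
c = np.array([4.0, 3.0])      # each agent's unconstrained preference
B = 5.0                       # shared budget coupling the two subproblems

y = 0.0                       # price (Lagrange multiplier) on x1 + x2 = B
for k in range(1, 501):
    x = np.maximum(c - y / a, 0.0)     # closed-form solution of each priced subproblem
    y += (1.0 / k) * (x.sum() - B)     # dual subgradient = coupling-constraint residual
print("allocations:", x, "price:", y)  # should approach x ~ (2.67, 2.33), y ~ 1.33
```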
The search direction for the master problem (12) is given by the subgradient \(\sigma = \sum_{j=1}^{S} \nabla_{t_l}\,\phi(t_l, p_j)\), whose terms are simply the Lagrange multipliers corresponding to the non-anticipativity constraints (9e). These are computed by solving the different scenario subproblems [9]. Hence, the ...
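To make the mechanism concrete, the toy sketch below assembles the master subgradient from scenario-subproblem multipliers for a one-dimensional capacity decision (averaged rather than summed, which only rescales the step). The recourse function, demands, costs, and step rule are invented for illustration and are not the model (9)/(12) referenced above.

```python
# Toy scenario decomposition: capacity t is chosen by a projected subgradient method whose
# search direction is built from the multipliers returned by the scenario subproblems.
# Recourse: phi(t, d_j) = min { q*u : u >= d_j - t, u >= 0 }; the optimal multiplier on the
# linking constraint is q when capacity falls short of demand d_j and 0 otherwise.
import numpy as np

rng = np.random.default_rng(4)
S, c, q = 50, 1.0, 3.0                  # scenarios, per-unit capacity cost, per-unit shortage cost
d = rng.uniform(0.0, 10.0, size=S)      # demand in each scenario

t = 0.0
for k in range(1, 2001):
    lam = np.where(t < d, q, 0.0)                  # scenario-subproblem multipliers on u >= d_j - t
    sigma = c - lam.mean()                         # master subgradient: capacity cost minus averaged multipliers
    t = max(t - (5.0 / np.sqrt(k)) * sigma, 0.0)   # projected subgradient step, t >= 0
print("planned capacity t:", t)   # should approach roughly the 2/3 demand quantile (about 6.7 here)
```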