We have presented a novel class of distributed continuous-time coordination algorithms that solve network optimization problems where the objective function is strictly convex and equal to a sum of local agent cost functions. For strongly connected and weight-balanced agent interactions, we have shown ...
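As an illustration of this class of dynamics, the following is a minimal sketch of one common saddle-point-type Laplacian flow for minimizing a sum of local strictly convex costs over a strongly connected, weight-balanced digraph, integrated with forward Euler. It is not necessarily the exact algorithm from the snippet above; the directed-cycle graph, quadratic costs, gain, and step size are illustrative assumptions.

```python
import numpy as np

# Directed cycle on n agents: strongly connected and weight-balanced.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = 1.0           # edge i -> i+1 with unit weight
L = np.diag(A.sum(axis=1)) - A        # out-degree Laplacian

# Local strictly convex costs f_i(x) = 0.5*(x - c_i)^2; the minimizer of the
# sum is the average of the c_i.
c = np.array([1.0, 3.0, -2.0, 5.0, 0.5])
grad = lambda x: x - c                # stacked local gradients

alpha, dt, T = 1.0, 1e-3, 40000
x = np.zeros(n)                       # local estimates
z = np.zeros(n)                       # integral (dual-like) states, sum kept at zero

for _ in range(T):
    # Saddle-point-type dynamics: xdot = -alpha*L x - grad(x) - z,  zdot = alpha*L x
    xdot = -alpha * L @ x - grad(x) - z
    zdot = alpha * L @ x
    x += dt * xdot
    z += dt * zdot

print("agent estimates:", np.round(x, 3), " target:", c.mean())
```

At the equilibrium of these dynamics the agents reach consensus and the stacked gradients sum to zero, which is exactly the first-order condition for the sum of the local costs.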
In addition, a comparison of algorithms is given to highlight the superiority of our algorithm. Conclusion: In this paper, we have investigated distributed online NGs over unbalanced digraphs, where the players are subject to heterogeneous convex set constraints. To seek the NE sequences of the online ...
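To make the NE-seeking setting concrete, here is a minimal sketch of projected gradient play for a static two-player game with local box constraints. It is a generic scheme, not the algorithm of the snippet above, and the quadratic costs, coupling coefficient, and constraint sets are assumptions chosen so the equilibrium is easy to verify.

```python
import numpy as np

# Two-player quadratic game with box constraints X_i = [-1, 1]:
#   J_i(x_i, x_j) = 0.5*(x_i - a_i)**2 + b*x_i*x_j     (illustrative costs)
a, b = np.array([0.6, -0.4]), 0.3
proj = lambda v: np.clip(v, -1.0, 1.0)        # Euclidean projection onto the local set

def partial_grad(i, x):
    j = 1 - i
    return x[i] - a[i] + b * x[j]             # gradient of J_i with respect to x_i

x = np.zeros(2)
step = 0.1
for _ in range(500):
    # Each player takes a projected gradient step on its own cost,
    # using the opponent's most recent action.
    x = proj(np.array([x[i] - step * partial_grad(i, x) for i in range(2)]))

# Unconstrained Nash equilibrium from the stationarity conditions, for comparison
ne = np.linalg.solve(np.array([[1.0, b], [b, 1.0]]), a)
print("gradient play:", np.round(x, 4), " analytic NE:", np.round(ne, 4))
```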
Logarithmic Regret Algorithms for Online Convex Optimization. E. Hazan, A. Agarwal, S. Kale, Machine Learning, 2007. In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space, from a fixed ...
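For context, one logarithmic-regret result from that line of work can be sketched in a few lines: for H-strongly-convex losses, online gradient descent with step sizes 1/(H t) attains O(log T) regret against the best fixed decision in hindsight. The quadratic losses, comparator, and data below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 2000
theta = rng.normal(size=d)                   # hidden center of the loss sequence (illustrative)

# Strongly convex losses f_t(x) = 0.5*||x - u_t||^2 with u_t drawn near theta
U = theta + 0.1 * rng.normal(size=(T, d))

x = np.zeros(d)
H = 1.0                                      # strong-convexity modulus of f_t
regret = 0.0
x_star = U.mean(axis=0)                      # best fixed decision in hindsight

for t in range(1, T + 1):
    u = U[t - 1]
    regret += 0.5 * np.sum((x - u) ** 2) - 0.5 * np.sum((x_star - u) ** 2)
    g = x - u                                # gradient of f_t at the played point
    x = x - (1.0 / (H * t)) * g              # OGD with step 1/(H*t)

print(f"cumulative regret after T={T}: {regret:.3f}  (log T = {np.log(T):.3f})")
```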
Review of finite-time consensus algorithms; the proposed finite-time consensus algorithm. Distributed MPC of constrained linear systems with time-varying terminal sets: introduction, preliminaries, main results, computation of P and K, distributed costs, the decoupled terminal se...
In this paper we propose two dual decomposition methods based on smoothing techniques, called here the proximal center method and the interior-point Lagrangian method, to solve separable convex problems in a distributed fashion. We show that some relevant centrali...
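For orientation, the plain (unsmoothed) dual decomposition that these methods refine can be sketched as follows: a separable objective coupled only through a linear constraint is split into local subproblems, and a dual subgradient step coordinates them. The quadratic subproblems and data below are assumptions made so the local minimizers have closed form.

```python
import numpy as np

# Separable problem:  min_x  sum_i 0.5*(x_i - c_i)^2   s.t.  sum_i x_i = b
c = np.array([2.0, -1.0, 4.0, 0.0])
b = 3.0

lam, step = 0.0, 0.2
for _ in range(200):
    # Each subproblem is solved locally given the multiplier:
    #   x_i = argmin 0.5*(x - c_i)^2 + lam*x  =  c_i - lam
    x = c - lam
    # Dual (sub)gradient ascent on the coupling-constraint residual
    lam += step * (x.sum() - b)

print("x:", np.round(x, 4), " sum(x):", round(x.sum(), 4), " target b:", b)
```

The smoothed variants in the snippet above replace this dual subgradient step with updates on a smoothed dual function, which is what yields better convergence rates.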
Convergence to a stationary solution of the original nonconvex optimization problem is established. Our framework is very general and flexible; it unifies several existing Successive Convex Approximation (SCA)-based algorithms such as (proximal) gradient or Newton-type methods, block coordinate (parallel) ...
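One concrete SCA instance is the convex-concave procedure: split the nonconvex objective into a convex part plus a concave part, linearize the concave part at the current iterate, and minimize the resulting convex surrogate. The one-dimensional double-well objective below is purely illustrative, not an example from the paper.

```python
import numpy as np

# Nonconvex objective split as convex + concave:
#   f(x) = x**4/4 - x**2/2 = g(x) + h(x),  g(x) = x**4/4 (convex), h(x) = -x**2/2 (concave)
f = lambda x: 0.25 * x**4 - 0.5 * x**2

x = 0.3                        # initial point (any nonzero value)
for k in range(30):
    # Convex surrogate at x_k: keep g exact, linearize h, i.e. minimize y**4/4 - x_k*y.
    # Setting the derivative y**3 - x_k to zero gives the closed-form update below.
    x = np.cbrt(x)

print("stationary point found:", round(float(x), 6), " f(x) =", round(float(f(x)), 6))
```

Each surrogate upper-bounds f and is tight at the current iterate, so the iterates decrease f monotonically and accumulate at a stationary point (here x = 1), mirroring the convergence statement in the snippet.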
To facilitate the development of scalable and rapid optimization algorithms, various learning-based approaches have been proposed in the literature [2], [3]. Some combinatorial optimization problems have been shown to be NP-hard, which makes most existing solvers non-scalable. Moreover, the ever-growing size of today's datasets renders existing optimization methods insufficient for such large-scale constrained ...
Specifically, we propose two different algorithms for solving the distributed online AUC maximization problem: (i) the Centralized Distributed One-Pass Online AUC Maximization (C-DOPOAM) algorithm, in which the server updates the global model with the gradients computed by the workers; and (ii) the ...
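Below is a generic parameter-server sketch of the centralized pattern described in (i): workers compute gradients of a pairwise AUC surrogate on local positive/negative pairs and a server averages them into the global model. The synthetic data, batch sizes, and hinge-type surrogate are assumptions for illustration; this is not the C-DOPOAM algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)
d, workers, rounds = 5, 4, 300
w_true = rng.normal(size=d)                       # hidden scoring direction (synthetic)

def local_grad(w, rng):
    # A worker draws a small batch of positive/negative examples and computes the
    # gradient of a pairwise hinge surrogate:  mean( max(0, 1 - w.(x_pos - x_neg)) )
    Xp = rng.normal(size=(8, d)) + 0.5 * w_true
    Xn = rng.normal(size=(8, d)) - 0.5 * w_true
    diff = Xp[:, None, :] - Xn[None, :, :]        # all positive/negative pairs
    margins = diff @ w
    active = (margins < 1.0).astype(float)        # pairs with violated margin
    return -(active[:, :, None] * diff).mean(axis=(0, 1))

w = np.zeros(d)
step = 0.1
for t in range(rounds):
    grads = [local_grad(w, rng) for _ in range(workers)]   # computed at the workers
    w -= step * np.mean(grads, axis=0)                     # server averages and updates

cos = w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true) + 1e-12)
print("cosine(w, w_true):", round(float(cos), 3))
```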
Continuous-time gradient flow algorithms have been widely investigated for convex optimization since the pioneering work (Arrow, Hurwicz, & Uzawa, 1958), and detailed references can be found in Bhaya and Kaszkurewicz (2006) and Liao, Qi, and Qi (2004). Gradient flow algorithms have been applied...
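As a minimal illustration of the gradient-flow viewpoint, the flow xdot = -grad f(x) for a convex quadratic can be integrated with forward Euler, which recovers plain gradient descent; the problem data below are arbitrary.

```python
import numpy as np

# Gradient flow xdot = -grad f(x) for the convex quadratic f(x) = 0.5*x'Qx - b'x
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: Q @ x - b

x, dt = np.array([5.0, -5.0]), 0.01
for _ in range(5000):
    x = x - dt * grad(x)                 # Euler step along the flow

print("flow limit:", np.round(x, 4), " minimizer:", np.round(np.linalg.solve(Q, b), 4))
```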