In this paper, we propose a distributed algorithm, called Directed-Distributed Gradient Descent (D-DGD), to solve multi-agent optimization problems over directed graphs. Existing algorithms mostly deal with similar problems under the assumption of undirected networks, i.e...
Distributed subgradient descent: In this paper, we propose a distributed algorithm, called Directed-Distributed Subgradient Descent (D-DSD), to solve multi-agent optimization problems over directed graphs. Existing algorithms mostly deal with similar problems under the assumption of undirected networks, i.e...
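A consensus-plus-subgradient scheme over a directed graph can be sketched with a generic subgradient-push iteration; this is illustrative only and does not reproduce the specific D-DSD construction. The ring topology, column-stochastic weights, and quadratic local objectives below are all assumed for the toy example.

```python
# Subgradient-push sketch on the directed ring 0 -> 1 -> 2 -> 0.
# Agent i privately holds f_i(x) = (x - b_i)^2; the network goal is
# min_x sum_i f_i(x), attained at mean(b). The weights are column-stochastic
# (each agent decides how to split what it SENDS), and the auxiliary variable
# y de-biases the directed mixing. All numbers are illustrative.
b = [1.0, 2.0, 6.0]                  # private data; optimum x* = 3.0
out_nb = {0: 1, 1: 2, 2: 0}          # single out-neighbor per agent
keep = [0.4, 0.5, 0.8]               # fraction of mass each agent keeps

x = [0.0, 0.0, 0.0]
y = [1.0, 1.0, 1.0]
for k in range(4000):
    step = 0.5 / (k + 1)             # diminishing step-size
    nx, ny = [0.0] * 3, [0.0] * 3
    for i in range(3):               # column-stochastic mixing
        nx[i] += keep[i] * x[i]
        ny[i] += keep[i] * y[i]
        j = out_nb[i]
        nx[j] += (1 - keep[i]) * x[i]
        ny[j] += (1 - keep[i]) * y[i]
    for i in range(3):
        z = nx[i] / ny[i]                      # de-biased estimate
        x[i] = nx[i] - step * 2 * (z - b[i])   # local subgradient step
        y[i] = ny[i]

print([round(x[i] / y[i], 2) for i in range(3)])
```

The de-biased ratios x_i / y_i are what converge to the optimizer; the raw states x_i alone would be skewed by the unbalanced directed mixing.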
The proposed algorithm uses stochastic gradient descent to update the generators' values, coping with the noise incurred during gradient estimation, and the step-sizes are heterogeneous. Under a strict convexity assumption on the objective functions, the algorithm converges to the exact optimal ...
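The core mechanism, stochastic approximation with noisy gradient estimates and a diminishing step-size, can be shown in a single-coordinate sketch; the objective, noise level, and step constants are assumed for illustration and are not the paper's dispatch model.

```python
import random

random.seed(0)

# Minimize f(x) = (x - 4)^2 when only noisy gradient estimates are available,
# using a diminishing step-size a / (k + 1). In the multi-agent setting each
# generator runs such an update with its own ("heterogeneous") constant a.
def sgd(a, iters=5000):
    x = 0.0
    for k in range(iters):
        noisy_grad = 2 * (x - 4.0) + random.gauss(0.0, 0.5)  # estimation noise
        x -= (a / (k + 1)) * noisy_grad
    return x

# two heterogeneous step-size constants; both runs approach x* = 4
print(round(sgd(0.5), 2), round(sgd(1.0), 2))
```

The diminishing step is what averages out the gradient noise: a constant step would leave the iterate fluctuating in a noise ball around the optimum.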
While several variations of gradient-descent-based primal methods have been proposed for distributed optimization over directed networks, an extension of dual-ascent methods to directed networks remains a less-explored area. In this paper, we propose a distributed version of the Alternating Direction ...
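For context, the undirected/centralized consensus-ADMM baseline that such work builds on can be sketched as follows; extending this dual-ascent machinery to directed networks is the paper's contribution and is not attempted here. Data and the penalty parameter rho are illustrative.

```python
# Consensus ADMM on min_x sum_i f_i(x) with f_i(x) = (x - b_i)^2:
# each agent keeps a local copy x_i constrained to equal a consensus
# variable z, with scaled dual variables u_i enforcing agreement.
b = [0.0, 3.0, 9.0]                  # consensus optimum: mean(b) = 4.0
rho = 1.0                            # ADMM penalty parameter
n = len(b)
x = [0.0] * n                        # local primal variables
u = [0.0] * n                        # scaled dual variables
z = 0.0                              # consensus variable
for _ in range(100):
    # closed-form x-minimization for the quadratic local objectives:
    # argmin_x (x - b_i)^2 + (rho/2)(x - z + u_i)^2
    x = [(2 * b[i] + rho * (z - u[i])) / (2 + rho) for i in range(n)]
    z = sum(x[i] + u[i] for i in range(n)) / n   # averaging (z-update)
    u = [u[i] + x[i] - z for i in range(n)]      # dual ascent step

print(round(z, 2))
```

The z-update here is a global average, which is exactly the step that a directed-network variant must replace with local, direction-respecting communication.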
The first step is a consensus-based scheme combined with the gradient descent method. Employing Lyapunov theory and LaSalle's invariance principle, convergence to the Optimal Solution (OS) is analyzed. Moreover, inspired by average consensus, ...
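A discrete-time sketch of combining a consensus step with a gradient step (the continuous-time Lyapunov/LaSalle analysis itself is not reproduced): three agents on an undirected path average with a doubly stochastic matrix, then descend along their private gradients. Topology, weights, and data are assumptions of the toy example.

```python
# Consensus + gradient descent on the undirected path 0 - 1 - 2.
# Agent i holds f_i(x) = (x - c_i)^2; the sum is minimized at mean(c).
c = [0.0, 1.0, 5.0]                  # optimum of the sum: mean(c) = 2.0
W = [[2/3, 1/3, 0.0],                # Metropolis weights, doubly stochastic
     [1/3, 1/3, 1/3],
     [0.0, 1/3, 2/3]]

x = [0.0, 0.0, 0.0]
for k in range(4000):
    step = 0.5 / (k + 1)             # diminishing step-size
    mixed = [sum(W[i][j] * x[j] for j in range(3)) for i in range(3)]
    x = [mixed[i] - step * 2 * (mixed[i] - c[i]) for i in range(3)]

print([round(v, 2) for v in x])
```

With a diminishing step the consensus term dominates asymptotically, so the agents agree and the agreed value drifts to the minimizer of the sum.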
We present a novel discrete-time Accelerated Distributed Nesterov gradient descent algorithm for directed networks, called ADND. This algorithm incorporates gradient tracking to enhance convergence rate and accuracy. Using row- and column-stochastic matrices simultaneously broadens its scope of ...
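The gradient-tracking core with simultaneous row- and column-stochastic mixing can be sketched with a push-pull-style iteration (without the Nesterov acceleration, and not ADND itself); the ring topology, weights, step-size, and data below are illustrative assumptions.

```python
# Gradient tracking on the directed ring 0 -> 1 -> 2 -> 0.
# R is row-stochastic (each agent averages what it pulls from in-neighbors);
# C is column-stochastic (each agent splits the tracker mass it pushes out).
# Agent i holds f_i(x) = (x - b_i)^2; the sum is minimized at mean(b).
b = [2.0, 4.0, 9.0]                  # optimum x* = 5.0

def grad(i, x):
    return 2.0 * (x - b[i])

R = [[0.3, 0.0, 0.7],                # rows sum to 1; agent 0 pulls from 2, etc.
     [0.6, 0.4, 0.0],
     [0.0, 0.8, 0.2]]
C = [[0.6, 0.0, 0.5],                # columns sum to 1; agent 0 pushes to 1, etc.
     [0.4, 0.7, 0.0],
     [0.0, 0.3, 0.5]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

x = [0.0, 0.0, 0.0]
y = [grad(i, x[i]) for i in range(3)]    # tracker initialized at local gradients
alpha = 0.02
for _ in range(5000):
    g_old = [grad(i, x[i]) for i in range(3)]
    x = matvec(R, [x[i] - alpha * y[i] for i in range(3)])
    g_new = [grad(i, x[i]) for i in range(3)]
    y = [matvec(C, y)[i] + g_new[i] - g_old[i] for i in range(3)]

print([round(v, 2) for v in x])
```

Because C is column-stochastic, the tracker y conserves the sum of local gradients, which is what lets a constant step-size reach the exact optimum instead of a neighborhood.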
We show that, by incorporating the recently introduced dynamic regressor extension and mixing procedure into a classical gradient-descent algorithm, improved convergence properties can be achieved. In particular, it is shown that with this modification, sufficient conditions for global convergence of all the...
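The extension-and-mixing idea can be sketched on a two-parameter linear regression: stacking two consecutive samples ("extension") and multiplying by the adjugate of the stacked regressor ("mixing") yields decoupled scalar regressions, each solvable by an independent scalar gradient update. The signals, gains, and the two-sample delay extension are illustrative choices, not the paper's construction.

```python
import math

# Identify theta in y_k = phi_k^T theta. For stacked regressor A and
# measurements [y0, y], adj(A) @ [y0, y] = det(A) * theta, giving one
# decoupled scalar regression Y_i = det * theta_i per parameter.
theta = [2.0, -1.0]                  # unknown parameters to recover
est = [0.0, 0.0]                     # per-parameter scalar estimators
prev = None
for k in range(300):
    phi = [math.sin(0.5 * k), math.cos(0.3 * k)]    # exciting regressor
    y = phi[0] * theta[0] + phi[1] * theta[1]       # noiseless measurement
    if prev is not None:
        phi0, y0 = prev
        det = phi0[0] * phi[1] - phi0[1] * phi[0]   # det of stacked regressor
        Y = [phi[1] * y0 - phi0[1] * y,             # adjugate times [y0, y]
             -phi[0] * y0 + phi0[0] * y]
        for i in range(2):
            err = Y[i] - det * est[i]
            est[i] += 0.5 * det * err / (1 + det * det)  # normalized scalar GD
    prev = (phi, y)

print([round(e, 3) for e in est])
```

Each coordinate now contracts monotonically whenever det is nonzero, which is the source of the improved convergence conditions compared with the coupled vector gradient update.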
Then, the novel push-sum-based constrained optimization algorithm (PSCOA) is developed, where a new gradient-descent-like method is applied to handle the closed convex set constraints. Furthermore, a rigorous convergence analysis is given under standard assumptions and ...
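A projected push-sum sketch of the general idea, gradient-like steps that keep iterates inside a closed convex set, follows; this is not PSCOA itself, and the ring topology, box constraint, and data are assumptions of the toy example.

```python
# Projected push-sum on the directed ring 0 -> 1 -> 2 -> 0:
# minimize sum_i (x - b_i)^2 subject to x in [0, 2.5]. The unconstrained
# optimum is mean(b) = 3, so the constrained optimum is the boundary point 2.5.
b = [1.0, 2.0, 6.0]
lo, hi = 0.0, 2.5                    # closed convex set (a box)
out_nb = {0: 1, 1: 2, 2: 0}          # single out-neighbor per agent

x = [0.0, 0.0, 0.0]
y = [1.0, 1.0, 1.0]
for k in range(3000):
    step = 1.0 / (k + 1)
    w, v = [0.0] * 3, [0.0] * 3
    for i in range(3):               # column-stochastic half/half split
        for j in (i, out_nb[i]):
            w[j] += 0.5 * x[i]
            v[j] += 0.5 * y[i]
    for i in range(3):
        z = w[i] / v[i]                                    # de-biased estimate
        z = min(max(z - step * 2 * (z - b[i]), lo), hi)    # gradient step + projection
        x[i] = v[i] * z              # re-scale so the ratio stays feasible
        y[i] = v[i]

print([round(x[i] / y[i], 2) for i in range(3)])
```

Projecting the de-biased ratio (rather than the raw push-sum state) is what keeps every agent's estimate feasible at every iteration while the directed averaging proceeds.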