Original abstract: We study the convergence of Optimistic Gradient Descent Ascent in unconstrained bilinear games. In a first part, we consider the zero-sum case and extend previous results by Daskalakis et al. in 2018, Liang and Stokes in 2019, and others: we prove, for any payoff matrix, the exponential...
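For concreteness, here is a minimal sketch of the update this abstract refers to, assuming the standard past-gradient form of OGDA on the unconstrained bilinear zero-sum game min_x max_y x^T A y; the function name, step size, and test matrix below are illustrative, not taken from the paper.

    import numpy as np

    def ogda_bilinear(A, eta=0.1, steps=1000, seed=0):
        """OGDA on the unconstrained bilinear zero-sum game min_x max_y x^T A y.

        Past-gradient form of the update:
            x_{t+1} = x_t - 2*eta*A y_t   + eta*A y_{t-1}
            y_{t+1} = y_t + 2*eta*A^T x_t - eta*A^T x_{t-1}
        """
        rng = np.random.default_rng(seed)
        n, m = A.shape
        x = x_prev = rng.standard_normal(n)
        y = y_prev = rng.standard_normal(m)
        for _ in range(steps):
            x_new = x - 2 * eta * A @ y + eta * A @ y_prev
            y_new = y + 2 * eta * A.T @ x - eta * A.T @ x_prev
            x_prev, y_prev, x, y = x, y, x_new, y_new
        return x, y

    # The distance to the equilibrium (0, 0) should shrink geometrically for small eta.
    A = np.array([[1.0, 0.0], [0.0, 2.0]])
    x, y = ogda_bilinear(A)
    print(np.linalg.norm(x), np.linalg.norm(y))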
Content hint: Tight Last-Iterate Convergence of the Extragradient and the Optimistic Gradient Descent-Ascent Algorithm for Constrained Monotone Variational Inequalities. Yang Cai (Yale University, yang.cai@yale.edu), Argyris Oikonomou (Yale University, argyris.oikonomou@yale.edu), Weiqiang Zheng (Yale University)...
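For context, a hedged sketch of the projected extragradient step for a constrained monotone variational inequality, assuming the textbook form z_{t+1/2} = P_Z(z_t - eta*F(z_t)), z_{t+1} = P_Z(z_t - eta*F(z_{t+1/2})); the helper names and the toy problem are our own, not from the paper.

    import numpy as np

    def extragradient_vi(F, project, z0, eta=0.1, steps=2000):
        """Projected extragradient for a constrained monotone VI:
        find z* in Z with <F(z*), z - z*> >= 0 for all z in Z.

            z_half = P_Z(z_t - eta * F(z_t))        # exploration step
            z_next = P_Z(z_t - eta * F(z_half))     # update with the looked-ahead operator
        """
        z = np.asarray(z0, dtype=float)
        for _ in range(steps):
            z_half = project(z - eta * F(z))
            z = project(z - eta * F(z_half))
        return z

    # Example: the bilinear saddle point min_x max_y x*y over the box [-1, 1]^2,
    # written as the VI operator F(x, y) = (y, -x).
    F = lambda z: np.array([z[1], -z[0]])
    project = lambda z: np.clip(z, -1.0, 1.0)
    print(extragradient_vi(F, project, z0=[0.8, -0.5]))   # approaches the solution (0, 0)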
We study the iteration complexity of the optimistic gradient descent-ascent (OGDA) method and the extragradient (EG) method for finding a saddle point of a convex-concave unconstrained min-max problem. To do so, we first show that both OGDA and EG can be interpreted as approximate variants ...
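A small numerical sketch of the interpretation mentioned here, under the common reading that EG and OGDA approximate the implicit proximal point step z_{t+1} = z_t - eta*F(z_{t+1}); the linear test operator, step size, and variable names are our own illustration.

    import numpy as np

    # For a linear monotone operator F(z) = B z the implicit proximal point (PP)
    # step can be solved in closed form, while EG and OGDA can be read as explicit
    # approximations that predict F at the next iterate.
    B = np.array([[0.0, 1.0], [-1.0, 0.0]])      # bilinear zero-sum game as an operator
    F = lambda z: B @ z
    eta, z0 = 0.2, np.array([1.0, 1.0])

    z_pp = z0.copy()                              # proximal point trajectory
    z_eg = z0.copy()                              # extragradient trajectory
    z_og, z_og_prev = z0.copy(), z0.copy()        # OGDA trajectory
    for _ in range(300):
        # PP:   z+ = z - eta * F(z+)   <=>   (I + eta*B) z+ = z
        z_pp = np.linalg.solve(np.eye(2) + eta * B, z_pp)
        # EG:   approximate F(z+) by F evaluated at a lookahead point
        z_eg = z_eg - eta * F(z_eg - eta * F(z_eg))
        # OGDA: approximate F(z+) by extrapolating from the previous operator value
        z_og, z_og_prev = z_og - eta * (2 * F(z_og) - F(z_og_prev)), z_og

    for name, z in [("PP", z_pp), ("EG", z_eg), ("OGDA", z_og)]:
        print(name, np.linalg.norm(z))            # all three distances to 0 shrink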
We characterize the limit points of two basic first order methods, namely Gradient Descent/Ascent (GDA) and Optimistic Gradient Descent Ascent (OGDA). We show that both dynamics avoid unstable critical points for almost all initializations. Moreover, for small step sizes and under mild assumptions...
In this section, we focus on analyzing the performance of optimistic gradient descent ascent (OGDA) for solving a general smooth convex-concave saddle point problem. It has been shown that the OGDA method recovers the convergence rate of the proximal point method for both strongly convex-strongly concav...
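A hedged sketch of OGDA driven only by gradient oracles for a smooth saddle-point problem, assuming the usual "twice the current minus the previous gradient" form of the update; the oracle names and the quadratic test problem are illustrative, not taken from this text.

    import numpy as np

    def ogda(grad_x, grad_y, x0, y0, eta=0.1, steps=300):
        """OGDA for a smooth saddle-point problem min_x max_y f(x, y),
        using only gradient oracles:

            x_{t+1} = x_t - 2*eta*g_x(x_t, y_t) + eta*g_x(x_{t-1}, y_{t-1})
            y_{t+1} = y_t + 2*eta*g_y(x_t, y_t) - eta*g_y(x_{t-1}, y_{t-1})
        """
        x, y = np.asarray(x0, float), np.asarray(y0, float)
        gx_prev, gy_prev = grad_x(x, y), grad_y(x, y)    # convention x_{-1} = x_0
        for _ in range(steps):
            gx, gy = grad_x(x, y), grad_y(x, y)
            x = x - 2 * eta * gx + eta * gx_prev
            y = y + 2 * eta * gy - eta * gy_prev
            gx_prev, gy_prev = gx, gy
        return x, y

    # Strongly convex-strongly concave test: f(x, y) = 0.5*x^2 + x*y - 0.5*y^2.
    x, y = ogda(lambda x, y: x + y, lambda x, y: x - y, x0=[1.0], y0=[1.0])
    print(x, y)   # both approach the saddle point at the origin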
the efficient representation of graphical games as well as the expressive power of EFGs. We examine the convergence properties of Optimistic Gradient Ascent (OGA) in these games. We prove that the time-average behavior of such online learning dynamics exhibits an O(1/T) rate of convergence to the set of...
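To make "time-average behavior" concrete, a toy sketch (our own, on an unconstrained bilinear zero-sum game rather than the graphical-game or EFG setting of this abstract) that tracks how far the running averages of the optimistic iterates are from the equilibrium.

    import numpy as np

    # Optimistic updates for both players of min_x max_y x*y; the time averages
    # (1/T) * sum_t (x_t, y_t) move toward the equilibrium (0, 0) as T grows.
    eta, T = 0.1, 2000
    x, y = 1.0, 1.0
    x_prev, y_prev = x, y
    sum_x = sum_y = 0.0
    for t in range(1, T + 1):
        x_new = x - 2 * eta * y + eta * y_prev    # minimizing player
        y_new = y + 2 * eta * x - eta * x_prev    # maximizing player
        x_prev, y_prev, x, y = x, y, x_new, y_new
        sum_x += x
        sum_y += y
        if t in (10, 100, 1000, 2000):
            print(t, abs(sum_x / t), abs(sum_y / t))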