No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
From MaRDI portal
Publication: 6126650
DOI: 10.1007/s10107-023-01976-y
arXiv: 2111.11309
MaRDI QID: Q6126650
Kfir Y. Levy, Jun-Kun Wang, Jacob Abernethy
Publication date: 9 April 2024
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/2111.11309
Keywords: convex optimization, online learning, zero-sum game, Frank-Wolfe method, no-regret learning, momentum methods, Nesterov's accelerated gradient methods
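The keywords reference the Frank-Wolfe (conditional gradient) method, one of the algorithms the paper recovers from its game-theoretic framework. As background only (this is the textbook projection-free method, not the paper's Fenchel-game derivation), a minimal sketch over the probability simplex, where the linear minimization oracle simply selects the vertex with the smallest gradient coordinate:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, num_iters=100):
    """Minimize a smooth convex f over the probability simplex.

    grad: callable returning the gradient of f at x.
    Over the simplex, the linear minimization oracle (LMO) returns
    the standard basis vector e_i with the smallest gradient entry.
    """
    x = x0.copy()
    for t in range(num_iters):
        g = grad(x)
        i = np.argmin(g)            # LMO: argmin over simplex vertices of <g, s>
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (t + 2.0)     # classical O(1/t) step-size schedule
        x = (1.0 - gamma) * x + gamma * s
    return x

# Illustrative use: minimize ||x - c||^2 over the simplex, where c
# (a hypothetical target chosen here) already lies in the simplex,
# so the minimizer is c itself.
c = np.array([0.2, 0.5, 0.3])
x_star = frank_wolfe_simplex(lambda x: 2.0 * (x - c),
                             np.array([1.0, 0.0, 0.0]), num_iters=500)
```

Each iterate is a convex combination of simplex vertices, so feasibility is maintained without projections; the standard guarantee is an O(1/t) suboptimality gap for smooth convex objectives.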
Cites Work
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Conditional gradient algorithms for norm-regularized smooth convex optimization
- An analog of the minimax theorem for vector payoffs
- Convex analysis and nonlinear optimization. Theory and examples.
- Dual extrapolation and its applications to solving variational inequalities and related problems
- Subgradient methods for saddle-point problems
- A modification of the Arrow-Hurwicz method for search of saddle points
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- Accelerated schemes for a class of variational inequalities
- An optimal randomized incremental gradient method
- A simple algorithm for a class of nonsmooth convex-concave saddle-point problems
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization
- Understanding the acceleration phenomenon via high-resolution differential equations
- On lower iteration complexity bounds for the convex concave saddle point problems
- Golden ratio algorithms for variational inequalities
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- First-order and stochastic optimization methods for machine learning
- Exploiting problem structure in optimization under uncertainty via online convex optimization
- Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity
- Efficient algorithms for online decision problems
- Interior projection-like methods for monotone variational inequalities
- Perturbed Fenchel duality and first-order methods
- Lectures on Modern Convex Optimization
- Conditional Gradient Sliding for Convex Optimization
- WHAT IS...a Fenchel Conjugate?
- Smoothing and First Order Methods: A Unified Framework
- A General Framework for a Class of First Order Primal-Dual Algorithms for Convex Optimization in Imaging Science
- Strongly convex analysis
- Online Learning and Online Convex Optimization
- An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- On the Generalization Ability of On-Line Learning Algorithms
- Dual gauge programs, with applications to quadratic programming and the minimum-norm problem
- Rates of Convergence for Conditional Gradient Algorithms Near Singular and Nonsingular Extremals
- Variational Analysis
- A Saddle Point Algorithm for Networked Online Convex Optimization
- An Optimal First Order Method Based on Optimal Quadratic Averaging
- Relatively Smooth Convex Optimization by First-Order Methods, and Applications
- Convergence Rates of Proximal Gradient Methods via the Convex Conjugate
- Variance-Based Extragradient Methods with Line Search for Stochastic Variational Inequalities
- The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods
- Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
- A variational perspective on accelerated methods in optimization
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings
- Accelerated Extra-Gradient Descent: A Novel Accelerated First-Order Method
- Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
- Solving variational inequalities with Stochastic Mirror-Prox algorithm
- Optimal Primal-Dual Methods for a Class of Saddle Point Problems
- Gauge Optimization and Duality
- Some methods of speeding up the convergence of iteration methods
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
- Training GANs with centripetal acceleration
- Introduction to Online Convex Optimization
- New analysis and results for the Frank-Wolfe method