An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
Publication: 6182323
DOI: 10.1007/s10957-023-02351-9
arXiv: 1711.03669
OpenAlex: W2883470300
MaRDI QID: Q6182323
Renbo Zhao, William B. Haskell, Le Thi Khanh Hien
Publication date: 25 January 2024
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://arxiv.org/abs/1711.03669
Keywords: stochastic optimization; convex optimization with functional constraints; inexact primal-dual smoothing; non-bilinear saddle point problems
MSC classification: Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Nonsmooth analysis (49J52)
Cites Work
- Primal-dual subgradient methods for convex problems
- Smooth minimization of non-smooth functions
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Gradient methods for minimizing composite functions
- Subgradient methods for saddle-point problems
- A simplified view of first order methods for optimization
- Accelerated schemes for a class of variational inequalities
- An optimal randomized incremental gradient method
- Iteration complexity of inexact augmented Lagrangian methods for constrained convex programming
- Accelerated randomized mirror descent algorithms for composite non-strongly convex optimization
- Affine-invariant contracting-point methods for convex optimization
- Rate Analysis of Inexact Dual First-Order Methods: Application to Dual Decomposition
- Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces
- A Level-Set Method for Convex Optimization with a Feasible Solution Path
- An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex–concave saddle-point problems
- First-Order Methods in Optimization
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- A Primal-Dual Algorithm with Line Search for General Convex-Concave Saddle Point Problems
- Contracting Proximal Methods for Smooth Convex Optimization
- Solving variational inequalities with Stochastic Mirror-Prox algorithm
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization