Optimal primal-dual methods for a class of saddle point problems
Publication: 5245366
DOI: 10.1137/130919362
zbMATH Open: 1329.90090
arXiv: 1309.5548
OpenAlex: W2092851214
MaRDI QID: Q5245366
Yuyuan Ouyang, Yunmei Chen, Guanghui Lan
Publication date: 8 April 2015
Published in: SIAM Journal on Optimization
Abstract: We present a novel accelerated primal-dual (APD) method for solving a class of deterministic and stochastic saddle point problems (SPP). The basic idea of this algorithm is to incorporate a multi-step acceleration scheme into the primal-dual method without smoothing the objective function. For deterministic SPP, the APD method achieves the same optimal rate of convergence as Nesterov's smoothing technique. Our stochastic APD method exhibits an optimal rate of convergence for stochastic SPP not only in terms of its dependence on the number of iterations, but also on a variety of problem parameters. To the best of our knowledge, this is the first time that such an optimal algorithm has been developed for stochastic SPP in the literature. Furthermore, for both deterministic and stochastic SPP, the developed APD algorithms can deal with the situation in which the feasible region is unbounded, as long as a saddle point exists. In the unbounded case, we incorporate the modified termination criterion introduced by Monteiro and Svaiter for solving SPP posed as a monotone inclusion problem, and demonstrate that the rate of convergence of the APD method depends on the distance from the initial point to the set of optimal solutions.
Full work available at URL: https://arxiv.org/abs/1309.5548
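The class of problems treated in the paper consists of bilinearly coupled saddle point problems of the form \(\min_{x \in X} \max_{y \in Y} \, G(x) + \langle Kx, y\rangle - J(y)\), where \(G\) is a smooth convex function, \(K\) a linear operator, and \(J\) a simple convex function. The sketch below is a minimal illustration of a primal-dual iteration with extrapolation on a small instance of this class. It follows the simpler primal-dual hybrid gradient (Chambolle-Pock) scheme rather than the paper's exact APD updates, and the choices of G, J, K, step sizes, and iteration count are assumptions made for the example, not taken from the paper.

```python
import numpy as np

def pdhg(K, b, n_iters=500, theta=1.0):
    """Primal-dual hybrid gradient with extrapolation for
        min_x (1/2)||x - b||^2 + ||K x||_1,
    written as the saddle point problem
        min_x max_{||y||_inf <= 1} (1/2)||x - b||^2 + <K x, y>.
    Illustrative sketch only -- not the APD scheme of the paper.
    """
    m, n = K.shape
    L = np.linalg.norm(K, 2)       # spectral norm ||K||
    tau = sigma = 0.9 / L          # ensures tau * sigma * ||K||^2 < 1
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(m)
    for _ in range(n_iters):
        # dual ascent step, then projection onto the unit inf-norm ball
        y = np.clip(y + sigma * (K @ x_bar), -1.0, 1.0)
        # primal descent step: closed-form prox of (1/2)||x - b||^2
        x_new = (x - tau * (K.T @ y) + tau * b) / (1.0 + tau)
        # extrapolation -- the multi-step ingredient the abstract alludes to
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = rng.standard_normal((20, 50))
    b = rng.standard_normal(50)
    x, y = pdhg(K, b)
    print("primal objective:",
          0.5 * np.sum((x - b) ** 2) + np.abs(K @ x).sum())
```

The APD method of the paper replaces the plain extrapolation above with a Nesterov-style multi-step acceleration of the primal sequence, which is what yields the optimal rate for smooth \(G\) without smoothing the bilinear term.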
Recommendations
- Accelerated stochastic algorithms for convex-concave saddle-point problems
- A primal-dual prediction-correction algorithm for saddle point optimization
- A primal-dual algorithm with line search for general convex-concave saddle point problems
- Solving saddle point problems: a landscape of primal-dual algorithm with larger stepsizes
- A generalized primal-dual algorithm with improved convergence condition for saddle point problems
Cited In (87)
- A relaxed parameter condition for the primal-dual hybrid gradient method for saddle-point problem
- Optimal subgradient methods: computational properties for large-scale linear inverse problems
- The saddle point problem of polynomials
- Algorithms for stochastic optimization with function or expectation constraints
- Modified algorithms for image inpainting in Fourier transform domain
- Acceleration of primal-dual methods by preconditioning and simple subproblem procedures
- Single image blind deblurring based on the fractional-order differential
- Communication-efficient algorithms for decentralized and stochastic optimization
- A three-stage approach for segmenting degraded color images: smoothing, lifting and thresholding (SLaT)
- A double extrapolation primal-dual algorithm for saddle point problems
- Local saddle points for unconstrained polynomial optimization
- A Stochastic Variance Reduced Primal Dual Fixed Point Method for Linearly Constrained Separable Optimization
- Stochastic accelerated alternating direction method of multipliers with importance sampling
- An Inverse-Adjusted Best Response Algorithm for Nash Equilibria
- Saddle points of rational functions
- A unified convergence rate analysis of the accelerated smoothed gap reduction algorithm
- Accelerated primal-dual proximal block coordinate updating methods for constrained convex optimization
- Point process estimation with Mirror Prox algorithms
- Dynamic stochastic approximation for multi-stage stochastic optimization
- On the linear convergence of the general first order primal-dual algorithm
- An optimal randomized incremental gradient method
- A first-order inexact primal-dual algorithm for a class of convex-concave saddle point problems
- Stochastic primal dual fixed point method for composite optimization
- A primal-dual prediction-correction algorithm for saddle point optimization
- A new algorithm framework for image inpainting in transform domain
- Block-proximal methods with spatially adapted acceleration
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Accelerated alternating direction method of multipliers: an optimal \(O(1 / K)\) nonergodic analysis
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- Fast bundle-level methods for unconstrained and ball-constrained convex optimization
- Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity
- A FISTA-type accelerated gradient algorithm for solving smooth nonconvex composite optimization problems
- A prediction-correction-based primal-dual hybrid gradient method for linearly constrained convex minimization
- Efficient Solvers for Saddle Point Problems with Applications to PDE–Constrained Optimization
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- A Distributed ADMM-like Method for Resource Sharing over Time-Varying Networks
- Easily Parallelizable and Distributable Class of Algorithms for Structured Sparsity, with Optimal Acceleration
- Stochastic relaxed inertial forward-backward-forward splitting for monotone inclusions in Hilbert spaces
- A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
- On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients
- Solving structured nonsmooth convex optimization with complexity \(\mathcal {O}(\varepsilon ^{-1/2})\)
- Accelerated schemes for a class of variational inequalities
- An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems
- An \(\mathcal O(1/{k})\) convergence rate for the variable stepsize Bregman operator splitting algorithm
- Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates
- An introduction to continuous optimization for imaging
- Conditional gradient sliding for convex optimization
- Reducing spatially varying out-of-focus blur from natural image
- A Primal-Dual Algorithm with Line Search for General Convex-Concave Saddle Point Problems
- Acceleration of the PDHGM on partially strongly convex functions
- An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems
- Accelerated Non-Overlapping Domain Decomposition Method for Total Variation Minimization
- Primal-Dual Stochastic Gradient Method for Convex Programs with Many Functional Constraints
- Adaptive primal-dual stochastic gradient method for expectation-constrained convex stochastic programs
- Primal-Dual First-Order Methods for Affinely Constrained Multi-block Saddle Point Problems
- Accelerated First-Order Primal-Dual Proximal Methods for Linearly Constrained Composite Convex Programming
- An Accelerated Linearized Alternating Direction Method of Multipliers
- High-performance statistical computing in the computing environments of the 2020s
- Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
- Accelerated gradient sliding for structured convex optimization
- A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates
- Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
- On dynamical system modeling of learned primal-dual with a linear operator \(\mathcal{K}\): stability and convergence properties
- A new algorithm for image inpainting in Fourier transform domain
- Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks
- General procedure to provide high-probability guarantees for stochastic saddle point problems
- Fast augmented Lagrangian method in the convex regime with convergence guarantees for the iterates
- A new prediction-correction primal-dual hybrid gradient algorithm for solving convex minimization problems with Linear constraints
- A proximal augmented Lagrangian method for linearly constrained nonconvex composite optimization problems
- Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
- Optimality conditions and numerical algorithms for a class of linearly constrained minimax optimization problems
- Fast numerical methods for image segmentation models
- Semi-proximal point method for nonsmooth convex-concave minimax optimization
- A stochastic variance-reduced accelerated primal-dual method for finite-sum saddle-point problems
- Stochastic Saddle Point Problems with Decision-Dependent Distributions
- Adaptively weighted difference model of anisotropic and isotropic total variation for image denoising
- Non-ergodic convergence rate of an inertial accelerated primal-dual algorithm for saddle point problems
- Distributed and consensus optimization for non-smooth image reconstruction
- A stochastic variance reduction algorithm with Bregman distances for structured composite problems
- Accelerated minimax algorithms flock together
- Solving saddle point problems: a landscape of primal-dual algorithm with larger stepsizes
- Practical acceleration of the Condat-Vũ algorithm
- Fast convergence of the primal-dual dynamical system and corresponding algorithms for a nonsmooth bilinearly coupled saddle point problem
- An alternative extrapolation scheme of PDHGM for saddle point problem with nonlinear function
- Derivative-free alternating projection algorithms for general nonconvex-concave minimax problems
- A unified single-loop alternating gradient projection algorithm for nonconvex-concave and convex-nonconcave minimax problems
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization