Two convergent primal-dual hybrid gradient type methods for convex programming with linear constraints
DOI: 10.1007/s41980-023-00778-4 · zbMATH Open: 1514.90187 · OpenAlex: W4366261617 · MaRDI QID: Q6100752
Authors: M. Sun, Jing Liu, Maoying Tian
Publication date: 22 June 2023
Published in: Bulletin of the Iranian Mathematical Society
Full work available at URL: https://doi.org/10.1007/s41980-023-00778-4
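For context, the record's subject, primal-dual hybrid gradient (PDHG) type methods for linearly constrained convex programming, can be illustrated by the classical first-order primal-dual iteration of the cited Chambolle-Pock reference. The sketch below shows that standard scheme applied to \(\min_x f(x)\) subject to \(Ax=b\) via its Lagrangian saddle-point formulation; it is background only, not the two specific methods proposed in this publication, and the step-size condition quoted is the usual sufficient one.
\[
\min_{x} f(x)\ \text{s.t.}\ Ax=b
\;\Longleftrightarrow\;
\min_{x}\max_{\lambda}\ \mathcal{L}(x,\lambda)=f(x)-\lambda^{\top}(Ax-b),
\]
\[
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_{x}\Big\{\, f(x)-(\lambda^{k})^{\top}Ax+\tfrac{1}{2r}\,\|x-x^{k}\|^{2}\Big\},\\
\lambda^{k+1} &= \lambda^{k}-s\,\big(A(2x^{k+1}-x^{k})-b\big),
\end{aligned}
\]
where \(r,s>0\) are the primal and dual step sizes and \(rs\,\|A^{\top}A\|<1\) is a standard sufficient condition for convergence.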
Recommendations
- On the convergence of primal-dual hybrid gradient algorithm
- A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science
- Acceleration and global convergence of a first-order primal-dual method for nonconvex problems
- On the convergence of stochastic primal-dual hybrid gradient
- Precompact convergence of the nonconvex primal-dual hybrid gradient algorithm
Cites Work
- Convex Analysis
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- A first-order primal-dual algorithm for convex problems with applications to imaging
- PPA-like contraction methods for convex optimization: a framework using variational inequality approach
- A customized proximal point algorithm for convex minimization with linear constraints
- Convergence analysis of primal-dual algorithms for a saddle-point problem: from contraction perspective
- Title not available
- Parallel splitting augmented Lagrangian methods for monotone structured variational inequalities
- On the \(O(1/t)\) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators
- On the \(O(1/t)\) convergence rate of the parallel descent-like method and parallel splitting augmented Lagrangian method for solving a class of variational inequalities
- A class of customized proximal point algorithms for linearly constrained convex optimization
- Improved Lagrangian-PPA based prediction correction method for linearly constrained convex optimization
- On convergence of the Arrow-Hurwicz method for saddle point problems
- A prediction-correction-based primal-dual hybrid gradient method for linearly constrained convex minimization
- From the projection and contraction methods for variational inequalities to the splitting contraction methods for convex optimization
Cited In (1)