Two convergent primal-dual hybrid gradient type methods for convex programming with linear constraints
Publication: 6100752
DOI: 10.1007/s41980-023-00778-4
zbMath: 1514.90187
OpenAlex: W4366261617
MaRDI QID: Q6100752
Maoying Tian, Min Sun, Jing Liu
Publication date: 22 June 2023
Published in: Bulletin of the Iranian Mathematical Society
Full work available at URL: https://doi.org/10.1007/s41980-023-00778-4
Cites Work
- PPA-like contraction methods for convex optimization: a framework using variational inequality approach
- On the \(O(1/t)\) convergence rate of the parallel descent-like method and parallel splitting augmented Lagrangian method for solving a class of variational inequalities
- On the \(O(1/t)\) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators
- A class of customized proximal point algorithms for linearly constrained convex optimization
- Parallel splitting augmented Lagrangian methods for monotone structured variational inequalities
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Improved Lagrangian-PPA based prediction correction method for linearly constrained convex optimization
- On convergence of the Arrow-Hurwicz method for saddle point problems
- A prediction-correction-based primal-dual hybrid gradient method for linearly constrained convex minimization
- A customized proximal point algorithm for convex minimization with linear constraints
- Convergence Analysis of Primal-Dual Algorithms for a Saddle-Point Problem: From Contraction Perspective
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Convex Analysis