An algorithmic framework of generalized primal-dual hybrid gradient methods for saddle point problems
Publication: 1702597
DOI: 10.1007/s10851-017-0709-5
zbMath: 1387.90186
OpenAlex: W2589182068
MaRDI QID: Q1702597
Feng Ma, Xiao-Ming Yuan, Bing-sheng He
Publication date: 28 February 2018
Published in: Journal of Mathematical Imaging and Vision
Full work available at URL: https://doi.org/10.1007/s10851-017-0709-5
Keywords: variational inequalities; convex programming; convergence rate; saddle point problem; image restoration; variational models; primal-dual hybrid gradient method
MSC classes: Convex programming (90C25); Computing methodologies for image processing (68U10); Image processing (compression, reconstruction, etc.) in information and communication theory (94A08)
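For orientation only: a minimal sketch of the classical primal-dual hybrid gradient (PDHG) iteration, as in the cited Chambolle-Pock paper, which the framework of this publication generalizes. The formulation, the step sizes $\tau,\sigma$, the extrapolation parameter $\theta$, and the condition $\tau\sigma\|A\|^{2}<1$ below are those of the standard method and are not taken from this paper.

\[
\min_{x}\max_{y}\;\langle Ax,\,y\rangle + g(x) - f^{*}(y),
\qquad
\left\{
\begin{aligned}
y^{k+1} &= \operatorname{prox}_{\sigma f^{*}}\bigl(y^{k} + \sigma A\bar{x}^{k}\bigr),\\
x^{k+1} &= \operatorname{prox}_{\tau g}\bigl(x^{k} - \tau A^{\top}y^{k+1}\bigr),\\
\bar{x}^{k+1} &= x^{k+1} + \theta\,(x^{k+1} - x^{k}),\qquad \theta\in[0,1],
\end{aligned}
\right.
\]

where $f^{*}$ denotes the convex conjugate of $f$ and the classical scheme takes $\theta=1$ with $\tau\sigma\|A\|^{2}<1$.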
Cites Work
- Nonlinear total variation based noise removal algorithms
- On the ergodic convergence rates of a first-order primal-dual algorithm
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- A unified primal-dual algorithm framework based on Bregman iteration
- An algorithm for total variation minimization and applications
- A first-order primal-dual algorithm for convex problems with applications to imaging
- On the convergence of primal-dual hybrid gradient algorithms for total variation image restoration
- On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers
- On the $O(1/n)$ Convergence Rate of the Douglas–Rachford Alternating Direction Method
- Convergence Analysis of Primal-Dual Algorithms for a Saddle-Point Problem: From Contraction Perspective
- A General Framework for a Class of First Order Primal-Dual Algorithms for Convex Optimization in Imaging Science
- On the Numerical Solution of Heat Conduction Problems in Two and Three Space Variables
- Efficient Schemes for Total Variation Minimization Under Constraints in Image Processing
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- Modified Lagrangians in convex programming and their generalizations
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- A remark on accelerated block coordinate descent for computing the proximity operators of a sum of convex functions
- On the Convergence of Primal-Dual Hybrid Gradient Algorithm