PPA-like contraction methods for convex optimization: a framework using variational inequality approach
From MaRDI portal
Publication: Q259109
DOI: 10.1007/s40305-015-0108-9 · zbMATH Open: 1332.65084 · OpenAlex: W2277300184 · MaRDI QID: Q259109
Authors: Bingsheng He
Publication date: 11 March 2016
Published in: Journal of the Operations Research Society of China
Full work available at URL: https://doi.org/10.1007/s40305-015-0108-9
Recommendations
- scientific article; zbMATH DE number 6270284
- A uniform framework of contraction methods for convex optimization and monotone variational inequality
- On relaxation of some customized proximal point algorithms for convex minimization: from variational inequality perspective
- Proximal-like contraction methods for monotone variational inequalities in a unified framework. II: General methods and numerical experiments
- Proximal-point algorithm using a linear proximal term
Mathematics Subject Classification: Numerical optimization and variational techniques (65K10); Convex programming (90C25); Nonlinear programming (90C30)
Cites Work
- Computing the nearest correlation matrix--a problem from finance
- On the \(O(1/n)\) convergence rate of the Douglas-Rachford alternating direction method
- A Singular Value Thresholding Algorithm for Matrix Completion
- Gradient methods for minimizing composite functions
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Matrix completion via an alternating direction method
- Monotone Operators and the Proximal Point Algorithm
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Title not available
- A customized proximal point algorithm for convex minimization with linear constraints
- Convergence analysis of primal-dual algorithms for a saddle-point problem: from contraction perspective
- On the convergence rate of customized proximal point algorithm for convex optimization and saddle-point problem
- On the convergence of primal-dual hybrid gradient algorithm
- Title not available
- Customized proximal point algorithms for linearly constrained convex minimization and saddle-point problems: a unified approach
Cited In (21)
- On relaxation of some customized proximal point algorithms for convex minimization: from variational inequality perspective
- Title not available
- Accelerated primal-dual methods with adaptive parameters for composite convex optimization with linear constraints
- A primal-dual multiplier method for total variation image restoration
- A distributed regularized Jacobi-type ADMM-method for generalized Nash equilibrium problems in Hilbert spaces
- Alternating direction method of multipliers for nonconvex log total variation image restoration
- A class of customized proximal point algorithms for linearly constrained convex optimization
- Proximal-like contraction methods for monotone variational inequalities in a unified framework. II: General methods and numerical experiments
- Two convergent primal-dual hybrid gradient type methods for convex programming with linear constraints
- Optimally linearizing the alternating direction method of multipliers for convex programming
- A modified Chambolle-Pock primal-dual algorithm for Poisson noise removal
- A proximal augmented method for semidefinite programming problems
- A primal-dual algorithm framework for convex saddle-point optimization
- Two new customized proximal point algorithms without relaxation for linearly constrained convex optimization
- Generalized ADMM with optimal indefinite proximal term for linearly constrained convex optimization
- An indefinite proximal Peaceman-Rachford splitting method with substitution procedure for convex programming
- ADMM-type methods for generalized Nash equilibrium problems in Hilbert spaces
- A uniform framework of contraction methods for convex optimization and monotone variational inequality
- On the optimal proximal parameter of an ADMM-like splitting method for separable convex programming
- Improved Lagrangian-PPA based prediction correction method for linearly constrained convex optimization
- An LQP-based two-step method for structured variational inequalities