Fast alternating linearization methods for minimizing the sum of two convex functions
DOI: 10.1007/s10107-012-0530-2
zbMath: 1280.65051
arXiv: 0912.4571
OpenAlex: W2010286849
Wikidata: Q101200642 (Scholia: Q101200642)
MaRDI QID: Q378095
Katya Scheinberg, Donald Goldfarb, Shi-Qian Ma
Publication date: 11 November 2013
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/0912.4571
Keywords: algorithm; convergence; maximal monotone operators; numerical results; convex optimization; Gauss-Seidel method; subgradient; Lipschitz constant; Peaceman-Rachford method; alternating direction method; large-scale problems; alternating linearization method; augmented Lagrangian method; iteration complexity bounds; Lipschitz continuous gradient; optimal gradient method; variable splitting
MSC classification: Analysis of algorithms and problem complexity (68Q25); Numerical mathematical programming methods (65K05); Convex programming (90C25); Large-scale problems in mathematical programming (90C06)
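The keywords above refer to alternating linearization schemes for minimizing a sum f(x) + g(x) of two convex functions, where each half-step linearizes one function and applies the proximal operator of the other. The sketch below illustrates that generic, unaccelerated idea on a toy lasso instance; the problem data, step size mu, and subgradient choice are illustrative assumptions, and this is not the accelerated variant with the O(1/k^2) iteration complexity bound analyzed in the paper.

```python
import numpy as np

def alternating_linearization(prox_f, grad_f, prox_g, subgrad_g, x0, mu, n_iter=200):
    """Generic alternating linearization sketch for min_x f(x) + g(x).

    Each half-step linearizes one of the two functions at the current point
    and minimizes the other function plus that linearization plus a proximal
    term (1/(2*mu))*||x - .||^2, i.e. applies the other function's proximal
    operator to a gradient (or subgradient) step.  Illustration only.
    """
    x = x0
    for _ in range(n_iter):
        y = prox_f(x - mu * subgrad_g(x), mu)   # linearize g, minimize the f-model
        x = prox_g(y - mu * grad_f(y), mu)      # linearize f, minimize the g-model
    return x

# Toy lasso instance (hypothetical data): f(x) = 0.5*||A x - b||^2, g(x) = lam*||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true
lam = 0.1
mu = 1.0 / np.linalg.norm(A, 2) ** 2   # step size tied to the Lipschitz constant of grad f

# prox of f solves a regularized least-squares system; prox of g is soft-thresholding.
prox_f = lambda z, t: np.linalg.solve(A.T @ A + np.eye(100) / t, A.T @ b + z / t)
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - lam * t, 0.0)
subgrad_g = lambda x: lam * np.sign(x)   # one valid subgradient of lam*||x||_1

x_hat = alternating_linearization(prox_f, grad_f, prox_g, subgrad_g, np.zeros(100), mu)
```

The step size is set to 1 over the squared spectral norm of A, a common conservative choice when grad f is Lipschitz; the paper instead selects subgradients from the proximal computations and adds acceleration, which this sketch omits.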
Cites Work
- Smooth minimization of non-smooth functions
- Sparse inverse covariance estimation with the graphical lasso
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Fast first-order methods for composite convex optimization with backtracking
- Convergence of fixed-point continuation algorithms for matrix rank minimization
- Fixed point and Bregman iterative methods for matrix rank minimization
- Partial inverse of a monotone operator
- Alternating direction augmented Lagrangian methods for semidefinite programming
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- A dual algorithm for the solution of nonlinear variational problems via finite element approximation
- Introductory lectures on convex optimization. A basic course.
- Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities
- A new inexact alternating directions method for monotone variational inequalities
- Further applications of a splitting algorithm to decomposition in variational inequalities and convex programming
- A family of projective splitting methods for the sum of two maximal monotone operators
- Exact matrix completion via convex optimization
- An alternating direction-based contraction method for linearly constrained separable convex programming problems
- Fast Multiple-Splitting Algorithms for Convex Optimization
- Robust principal component analysis?
- Alternating Direction Algorithms for $\ell_1$-Problems in Compressive Sensing
- The Split Bregman Method for L1-Regularized Problems
- The Numerical Solution of Parabolic and Elliptic Differential Equations
- On the Numerical Solution of Heat Conduction Problems in Two and Three Space Variables
- Fixed-Point Continuation for $\ell_1$-Minimization: Methodology and Convergence
- Model selection and estimation in the Gaussian graphical model
- An EM algorithm for wavelet-based image restoration
- Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- Applications of a Splitting Algorithm to Decomposition in Convex Programming and Variational Inequalities
- Solving monotone inclusions via compositions of nonexpansive averaged operators
- Proximal Decomposition Via Alternating Linearization
- Regularization Methods for Semidefinite Programming
- Matrix Completion From a Few Entries
- The Power of Convex Relaxation: Near-Optimal Matrix Completion
- Fast Image Recovery Using Variable Splitting and Constrained Optimization
- An Augmented Lagrangian Approach to the Constrained Optimization Formulation of Imaging Inverse Problems
- Signal Recovery by Proximal Forward-Backward Splitting
- Compressed sensing