Augmented $\ell_1$ and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm
DOI: 10.1137/120863290
zbMath: 1279.68329
arXiv: 1201.4615
OpenAlex: W2075826622
MaRDI QID: Q2873229
Publication date: 23 January 2014
Published in: SIAM Journal on Imaging Sciences
Full work available at URL: https://arxiv.org/abs/1201.4615
Keywords: matrix completion; compressed sensing; sparse optimization; low-rank matrix; exact regularization; global linear convergence
MSC classification: Convex programming (90C25); Numerical optimization and variational techniques (65K10); Computing methodologies for image processing (68U10); Interior-point methods (90C51)
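For context, the augmented $\ell_1$ model studied in the paper adds a small strongly convex term to basis pursuit, $\min \|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax = b$, which makes the Lagrange dual smooth; gradient ascent on that dual is the linearized Bregman iteration, and the paper's headline result is its global linear convergence under a restricted strong convexity condition. Below is a minimal sketch of that iteration, assuming NumPy; the test problem, the parameter alpha, and the step size tau are illustrative choices, not values from the paper.

```python
# Minimal sketch (not the authors' reference code): linearized Bregman
# iteration for the augmented l1 model
#     min ||x||_1 + 1/(2*alpha) * ||x||_2^2   s.t.  A x = b,
# implemented as gradient ascent on the model's smooth dual.
import numpy as np

def shrink(z, mu):
    """Soft-thresholding: sign(z) * max(|z| - mu, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - mu, 0.0)

def augmented_l1(A, b, alpha=5.0, tau=None, iters=5000):
    m, n = A.shape
    if tau is None:
        # The dual gradient is Lipschitz with constant alpha * ||A||_2^2,
        # so this step size is a safe (illustrative) default.
        tau = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)
    y = np.zeros(m)                          # dual variable
    x = np.zeros(n)
    for _ in range(iters):
        x = alpha * shrink(A.T @ y, 1.0)     # primal point induced by the dual
        y = y + tau * (b - A @ x)            # dual gradient ascent step
    return x

if __name__ == "__main__":
    # Hypothetical compressed-sensing test instance.
    rng = np.random.default_rng(0)
    n, m, k = 200, 80, 5                     # ambient dim, measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true
    x = augmented_l1(A, b)
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

By exact regularization (one of the entry's keywords), for alpha large enough the augmented model recovers a solution of the unregularized $\ell_1$ problem; the nuclear-norm analogue replaces soft-thresholding of the vector with singular-value thresholding of a matrix.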
Related Items
Projected shrinkage algorithm for box-constrained \(\ell _1\)-minimization
Eventual linear convergence of the Douglas-Rachford iteration for basis pursuit
Revisiting linearized Bregman iterations under Lipschitz-like convexity condition
Extragradient and extrapolation methods with generalized Bregman distances for saddle point problems
Local linear convergence of a primal-dual algorithm for the augmented convex models
Extended randomized Kaczmarz method for sparse least squares and impulsive noise problems
Iterative methods based on soft thresholding of hierarchical tensors
Low-rank matrix recovery problem minimizing a new ratio of two norms approximating the rank function then using an ADMM-type solver with applications
Proximal linearization methods for Schatten \(p\)-quasi-norm minimization
Asynchronous Stochastic Coordinate Descent: Parallelism and Convergence Properties
On the convergence of asynchronous parallel iteration with unbounded delays
Stability of the elastic net estimator
Sparse sampling Kaczmarz–Motzkin method with linear convergence
Variance reduction for root-finding problems
A flexible ADMM algorithm for big data applications
Linear convergence of the randomized sparse Kaczmarz method
A new piecewise quadratic approximation approach for \(L_0\) norm minimization problem
The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
An improved algorithm for basis pursuit problem and its applications
Redundancy Techniques for Straggler Mitigation in Distributed Optimization and Learning
Sparse + low-energy decomposition for viscous conservation laws
Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
Low-rank matrix recovery via regularized nuclear norm minimization
Sparse recovery via differential inclusions
New analysis of linear convergence of gradient-type methods via unifying error bound conditions
On the Convergence of Decentralized Gradient Descent
Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
A time continuation based fast approximate algorithm for compressed sensing related optimization
EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
Regularized Kaczmarz Algorithms for Tensor Recovery
Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization