Decomposable norm minimization with proximal-gradient homotopy algorithm
From MaRDI portal
Publication: 513723
DOI: 10.1007/s10589-016-9871-8
zbMath: 1392.90105
arXiv: 1501.06711
OpenAlex: W3104908198
MaRDI QID: Q513723
Publication date: 7 March 2017
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/1501.06711
Related Items
- Fast and Reliable Parameter Estimation from Nonlinear Observations
- A simple homotopy proximal mapping algorithm for compressive sensing
Cites Work
- Gradient methods for minimizing composite functions
- Simple bounds for recovering low-complexity models
- On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems
- On the complexity analysis of randomized block-coordinate descent methods
- Fixed point and Bregman iterative methods for matrix rank minimization
- Estimation of high-dimensional low-rank matrices
- Oracle inequalities and optimal inference under group sparsity
- CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- Some inequalities for Gaussian processes and applications
- Introductory lectures on convex optimization. A basic course.
- The convex geometry of linear inverse problems
- On the conditions used to prove oracle results for the Lasso
- Sparsity oracle inequalities for the Lasso
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Reconstruction and subgaussian operators in asymptotic geometric analysis
- A Proximal-Gradient Homotopy Method for the Sparse Least-Squares Problem
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- A Fast Algorithm for Sparse Reconstruction Based on Shrinkage, Subspace Optimization, and Continuation
- A Singular Value Thresholding Algorithm for Matrix Completion
- Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints
- Fixed-Point Continuation for $\ell_1$-Minimization: Methodology and Convergence
- Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
- Interior-Point Method for Nuclear Norm Approximation with Application to System Identification
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- Monotone Operators and the Proximal Point Algorithm
- Linear Convergence of Stochastic Iterative Greedy Algorithms With Sparse Constraints
- Sparse Reconstruction by Separable Approximation
- The Generic Chaining
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements
- Recovering Low-Rank Matrices From Few Coefficients in Any Basis
- On first-order algorithms for $\ell_1$/nuclear norm minimization
- Stable signal recovery from incomplete and inaccurate measurements
- Compressed sensing
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers