The convex geometry of linear inverse problems
From MaRDI portal
Publication: 1928276
DOI: 10.1007/s10208-012-9135-7
zbMath: 1280.52008
arXiv: 1012.0621
OpenAlex: W3124617746
Wikidata: Q114830779 (Scholia: Q114830779)
MaRDI QID: Q1928276
Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, Alan S. Willsky
Publication date: 3 January 2013
Published in: Foundations of Computational Mathematics
Full work available at URL: https://arxiv.org/abs/1012.0621
MSC classifications:
- Geometric probability and stochastic geometry (60D05)
- Semidefinite programming (90C22)
- Convex programming (90C25)
- Convex functions and convex programs in convex geometry (52A41)
- Approximation by arbitrary linear expressions (41A45)
Related Items
- A Simple Tool for Bounding the Deviation of Random Matrices on Geometric Sets
- Randomized numerical linear algebra: Foundations and algorithms
- Greedy Algorithm Almost Dominates in Smoothed Contextual Bandits
- WARPd: A Linearly Convergent First-Order Primal-Dual Algorithm for Inverse Problems with Approximate Sharpness Conditions
- On the universality of noiseless linear estimation with respect to the measurement matrix
- Identifying 3D Genome Organization in Diploid Organisms via Euclidean Distance Geometry
- Dimension-free bounds for largest singular values of matrix Gaussian series
- Optimal Injectivity Conditions for Bilinear Inverse Problems with Applications to Identifiability of Deconvolution Problems
- The Geometry of Rank-One Tensor Completion
- Sharp global convergence guarantees for iterative nonconvex optimization with random data
- Robust Recovery of Low-Rank Matrices and Low-Tubal-Rank Tensors from Noisy Sketches
- A unified approach to uniform signal recovery from nonlinear observations
- Asymptotic linear convergence of fully-corrective generalized conditional gradient methods
- Sampling rates for \(\ell^1\)-synthesis
- An Epigraphical Approach to the Representer Theorem
- A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
- Noisy linear inverse problems under convex constraints: exact risk asymptotics in high dimensions
- The Lasso with general Gaussian designs with applications to hypothesis testing
- Guarantees of total variation minimization for signal recovery
- Model selection with low complexity priors
- Time for dithering: fast and quantized random embeddings via the restricted isometry property
- Super-resolution radar
- Convex geometry of quantum resource quantification
- Stable low-rank matrix recovery via null space properties
- Symmetric Tensor Nuclear Norms
- Polynomial Norms
- TV-based reconstruction of periodic functions
- Multichannel blind deconvolution via maximum likelihood estimator: application in neural recordings
- Persistent homology for low-complexity models
- Computational and statistical tradeoffs via convex relaxation
- The Alternating Descent Conditional Gradient Method for Sparse Inverse Problems
- Low-Rank Tensor Recovery using Sequentially Optimal Modal Projections in Iterative Hard Thresholding (SeMPIHT)
- Cut Pursuit: Fast Algorithms to Learn Piecewise Constant Functions on General Weighted Graphs
- Bayesian computation: a summary of the current state, and samples backwards and forwards
- A Data-Independent Distance to Infeasibility for Linear Conic Systems
- Modular proximal optimization for multidimensional total-variation regularization
- Adaptive Low-Nonnegative-Rank Approximation for State Aggregation of Markov Chains
- Semidefinite Representations of Gauge Functions for Structured Low-Rank Matrix Decomposition
- Graphical Model Selection for Gaussian Conditional Random Fields in the Presence of Latent Variables
- On Representer Theorems and Convex Regularization
- Sparse Inverse Problems over Measures: Equivalence of the Conditional Gradient and Exchange Methods
- Isolated calmness of solution mappings and exact recovery conditions for nuclear norm optimization problems
- A Derivative-Free Method for Structured Optimization Problems
- A Convex Approach to Superresolution and Regularization of Lines in Images
- Generalized Conditional Gradient for Sparse Estimation
- Regularization and the small-ball method II: complexity dependent error rates
- One-bit compressed sensing via \(\ell_p\) (0 < p < 1)-minimization method
- On the Convergence Rate of Projected Gradient Descent for a Back-Projection Based Objective
- Robust Width: A Characterization of Uniformly Stable and Robust Compressed Sensing
- Signal Decomposition Using Masked Proximal Operators
- Fundamental barriers to high-dimensional regression with convex penalties
- Average-case complexity without the black swans
- Convex optimization on Banach spaces
- A geometrical stability condition for compressed sensing
- Regularized linear system identification using atomic, nuclear and kernel-based norms: the role of the stability constraint
- One condition for solution uniqueness and robustness of both \(\ell_1\)-synthesis and \(\ell_1\)-analysis minimizations
- Improved bounds for sparse recovery from subsampled random convolutions
- Learning by atomic norm regularization with polynomial kernels
- Self-scaled bounds for atomic cone ranks: applications to nonnegative rank and cp-rank
- Geometric inference for general high-dimensional linear inverse problems
- Book Review: A mathematical introduction to compressive sensing
- Noisy tensor completion via the sum-of-squares hierarchy
- Compressed Sensing, Sparse Inversion, and Model Mismatch
- Recovering Structured Signals in Noise: Least-Squares Meets Compressed Sensing
- Sparse Model Uncertainties in Compressed Sensing with Application to Convolutions and Sporadic Communication
- Cosparsity in Compressed Sensing
- Structured Sparsity: Discrete and Convex Approaches
- Sharp MSE bounds for proximal denoising
- Low-Rank Inducing Norms with Optimality Interpretations
- The restricted isometry property of block diagonal matrices for group-sparse signal recovery
- Biorthogonal greedy algorithms in convex optimization
- Optimization Methods for Synthetic Aperture Radar Imaging
- Low rank matrix recovery from rank one measurements
- Recovery analysis for weighted \(\ell_{1}\)-minimization using the null space property
- High-dimensional change-point estimation: combining filtering with convex optimization
- Recovery error analysis of noisy measurement in compressed sensing
- Screening for a reweighted penalized conditional gradient method
- Generalized notions of sparsity and restricted isometry property. II: Applications
- Almost everywhere injectivity conditions for the matrix recovery problem
- When does OMP achieve exact recovery with continuous dictionaries?
- \(\ell^1\)-analysis minimization and generalized (co-)sparsity: when does recovery succeed?
- On Efficiently Solving the Subproblems of a Level-Set Method for Fused Lasso Problems
- Compressed sensing of data with a known distribution
- Generalizing CoSaMP to signals from a union of low dimensional linear subspaces
- Simple bounds for recovering low-complexity models
- Convex optimization in sums of Banach spaces
- Multicompartment magnetic resonance fingerprinting
- Quadratic growth conditions and uniqueness of optimal solution to Lasso
- Generic error bounds for the generalized Lasso with sub-exponential data
- Fast and Reliable Parameter Estimation from Nonlinear Observations
- Learning with optimal interpolation norms
- Sharp recovery bounds for convex demixing, with applications
- Tuning complexity in regularized kernel-based regression and linear system identification: the robustness of the marginal likelihood estimator
- Robust analysis \(\ell_1\)-recovery from Gaussian measurements and total variation minimization
- The minimal measurement number for low-rank matrix recovery
- Blind three dimensional deconvolution via convex optimization
- Lasso guarantees for \(\beta\)-mixing heavy-tailed time series
- Terracini convexity
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
- Analysis \(\ell_1\)-recovery with frames and Gaussian measurements
- From compression to compressed sensing
- A perturbation inequality for concave functions of singular values and its applications in low-rank matrix recovery
- Atomic norm minimization for decomposition into complex exponentials and optimal transport in Fourier domain
- Linear regression with sparsely permuted data
- Adaptive confidence sets in shape restricted regression
- Optimal rates of statistical seriation
- Kernel methods in system identification, machine learning and function estimation: a survey
- Complex phase retrieval from subgaussian measurements
- An Introduction to Compressed Sensing
- A new perspective on least squares under convex constraint
- Greedy expansions in convex optimization
- Learning semidefinite regularizers
- A convex variational model for learning convolutional image atoms from incomplete data
- On the solution uniqueness characterization in the L1 norm and polyhedral gauge recovery
- Stable recovery of low-dimensional cones in Hilbert spaces: one RIP to rule them all
- Decomposable norm minimization with proximal-gradient homotopy algorithm
- Stable separation and super-resolution of mixture models
- Sharp oracle inequalities for least squares estimators in shape restricted regression
- A Tight Bound of Hard Thresholding
- Subspace clustering by \((k,k)\)-sparse matrix factorization
- Tightness of the maximum likelihood semidefinite relaxation for angular synchronization
- Energy on spheres and discreteness of minimizing measures
- Sparsity of solutions for variational inverse problems with finite-dimensional data
- Super-resolution by means of Beurling minimal extrapolation
- Riemannian gradient descent methods for graph-regularized matrix completion
- Estimation in High Dimensions: A Geometric Perspective
- Convex Recovery of a Structured Signal from Independent Random Linear Measurements
- Low Complexity Regularization of Linear Inverse Problems
- System identification using kernel-based regularization: new insights on stability and consistency issues
- Phase recovery, MaxCut and complex semidefinite programming
- On risk bounds in isotonic and other shape restricted regression problems
- Robust recovery of complex exponential signals from random Gaussian projections via low rank Hankel matrix reconstruction
- Superresolution 2D DOA estimation for a rectangular array via reweighted decoupled atomic norm minimization
- Tensor theta norms and low rank recovery
- Approximate support recovery of atomic line spectral estimation: a tale of resolution and precision
- Generalization bounds for learning with linear, polygonal, quadratic and conic side knowledge
- Sharp oracle inequalities for low-complexity priors
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Provably optimal sparse solutions to overdetermined linear systems with non-negativity constraints in a least-squares sense by implicit enumeration
- Efficient proximal mapping computation for low-rank inducing norms
- Super-resolution for doubly-dispersive channel estimation
- Preserving injectivity under subgaussian mappings and its application to compressed sensing
- Optimizing optimization: accurate detection of hidden interactions in active body systems from noisy data
- On the robustness of minimum norm interpolators and regularized empirical risk minimizers
- Small-deviation inequalities for sums of random matrices
- On model selection consistency of regularized M-estimators
- Greedy approximation in convex optimization
- Proof methods for robust low-rank matrix recovery
- Analysis of sparse recovery algorithms via the replica method
- Hierarchical isometry properties of hierarchical measurements
Uses Software
Cites Work
- Tensor Decompositions and Applications
- Fixed point and Bregman iterative methods for matrix rank minimization
- Null space conditions and thresholds for rank minimization
- Combinatorics of random processes and sections of convex bodies
- A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
- Optimization. Algorithms and consistent approximations
- Introductory lectures on convex optimization. A basic course.
- Semidefinite programming relaxations for semialgebraic problems
- Some remarks on greedy algorithms
- On the conditions used to prove oracle results for the Lasso
- Probability of unique integer solution to a system of linear equations
- Counting the faces of randomly-projected hypercubes and orthants, with applications
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension
- The sizes of compact subsets of Hilbert space and continuity of Gaussian processes
- Exact matrix completion via convex optimization
- Orthogonal Tensor Decompositions
- Robust principal component analysis?
- Linearized Bregman iterations for compressed sensing
- A Singular Value Thresholding Algorithm for Matrix Completion
- Theta Bodies for Polynomial Ideals
- Counting faces of randomly projected polytopes when the projection radically lowers dimension
- Rank-Sparsity Incoherence for Matrix Decomposition
- Fixed-Point Continuation for $\ell_1$-Minimization: Methodology and Convergence
- An EM algorithm for wavelet-based image restoration
- Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Decoding by Linear Programming
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- A generalized proximal point algorithm for certain non-convex minimization problems
- A general atomic decomposition theorem and Banach's closed range theorem
- Universal approximation bounds for superpositions of a sigmoidal function
- A random polynomial-time algorithm for approximating the volume of convex bodies
- Lectures on Polytopes
- Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming
- Sparse Reconstruction by Separable Approximation
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Inferring Rankings Using Constrained Sensing
- Precise Stability Phase Transitions for $\ell_1$ Minimization: A Unified Geometric Framework
- Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements
- Toeplitz Compressed Sensing Matrices With Applications to Sparse Channel Estimation
- Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem
- Sparse nonnegative solution of underdetermined linear equations by linear programming
- Bregman Iterative Algorithms for $\ell_1$-Minimization with Applications to Compressed Sensing
- Approximating the Cut-Norm via Grothendieck's Inequality
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution
- Signal Recovery by Proximal Forward-Backward Splitting
- Convex Analysis
- Compressed sensing
- Geometry of cuts and metrics
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers