Stable signal recovery from incomplete and inaccurate measurements
Publication: Q5486267
- Statistical aspects of information-theoretic topics (62B10)
- Numerical optimization and variational techniques (65K10)
- Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
- Image processing (compression, reconstruction, etc.) in information and communication theory (94A08)
- Rate-distortion theory in information and communication theory (94A34)
Abstract: Suppose we wish to recover an \(m\)-dimensional real-valued vector \(x_0\) (e.g. a digital signal or image) from incomplete and contaminated observations \(y = Ax_0 + e\); \(A\) is an \(n \times m\) matrix with far fewer rows than columns (\(n \ll m\)) and \(e\) is an error term. Is it possible to recover \(x_0\) accurately based on the data \(y\)? To recover \(x_0\), we consider the solution \(x^*\) to the \(\ell_1\)-regularization problem \(\min \|x\|_1\) subject to \(\|Ax - y\|_2 \leq \epsilon\), where \(\epsilon\) is the size of the error term \(e\). We show that if \(A\) obeys a uniform uncertainty principle (with unit-normed columns) and if the vector \(x_0\) is sufficiently sparse, then the solution is within the noise level, \(\|x^* - x_0\|_2 \leq C\epsilon\). As a first example, suppose that \(A\) is a Gaussian random matrix; then stable recovery occurs for almost all such \(A\)'s provided that the number of nonzeros of \(x_0\) is of about the same order as the number of observations. Second, suppose one observes few Fourier samples of \(x_0\); then stable recovery occurs for almost any set of \(n\) coefficients provided that the number of nonzeros is of the order of \(n/[\log m]^6\). In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals.
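The noiseless special case mentioned in the abstract (error term \(e = 0\), so \(\epsilon = 0\)) can be sketched numerically: the \(\ell_1\) problem \(\min \|x\|_1\) subject to \(Ax = y\) is a linear program after the standard split \(x = u - v\) with \(u, v \geq 0\). The sketch below uses SciPy's generic LP solver for illustration only; it is not the algorithm from the paper, and the problem sizes are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to Ax = y (noiseless case, epsilon = 0)
    as a linear program via the split x = u - v with u, v >= 0."""
    n, m = A.shape
    c = np.ones(2 * m)            # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])     # equality constraint: A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:m], res.x[m:]
    return u - v

# Demo: recover a sparse x0 from n << m Gaussian measurements,
# mirroring the abstract's first example (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
n, m, k = 40, 100, 5                           # measurements, dimension, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)   # columns approximately unit-normed
x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x0

x_star = basis_pursuit(A, y)
print(np.linalg.norm(x_star - x0))   # small: exact recovery up to solver tolerance
```

With the number of nonzeros well below the number of observations, the recovery error here is on the order of the LP solver's tolerance, consistent with the exact-recovery regime described in the abstract.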
Recommendations
Cited in
(first 100 items shown)
- Compressed sensing
- Sparse approximation using \(\ell_1-\ell_2\) minimization and its application to stochastic collocation
- Stochastic collocation methods via \(\ell_1\) minimization using randomized quadratures
- A new computational method for the sparsest solutions to systems of linear equations
- A statistical approach to the problem of restoring damaged and contaminated images
- Sparse recovery by reduced variance stochastic approximation
- Weak stability of \(\ell_1\)-minimization methods in sparse data reconstruction
- Adaptive compressive learning for prediction of protein-protein interactions from primary sequence
- Sparse signals recovered by non-convex penalty in quasi-linear systems
- Linearly involved generalized Moreau enhanced models and their proximal splitting algorithm under overall convexity condition
- Empirical average-case relation between undersampling and sparsity in X-ray CT
- Periodic spline-based frames for image restoration
- Compressive Sensing
- Hierarchical Bayesian models and sparsity: \(\ell_2\)-magic
- A simple and flexible model order reduction method for FFT-based homogenization problems using a sparse sampling technique
- A data-driven framework for sparsity-enhanced surrogates with arbitrary mutually dependent randomness
- Convergence of a data-driven time-frequency analysis method
- Enabling numerically exact local solver for waveform inversion -- a low-rank approach
- Compressed-sensing-based gradient reconstruction for ghost imaging
- Compressive total variation for image reconstruction and restoration
- Iterative hard thresholding for compressed sensing
- Fast convex pruning of deep neural networks
- Sparse optimization via vector \(k\)-norm and DC programming with an application to feature selection for support vector machines
- A simple proof of the restricted isometry property for random matrices
- Signal reconstruction by conjugate gradient algorithm based on smoothing \(l_1\)-norm
- Nonuniqueness of solutions of a class of \(\ell_0\)-minimization problems
- GenMod: a generative modeling approach for spectral representation of PDEs with random inputs
- Choosing relaxation parameter in randomized Kaczmarz method
- Image reconstruction based on nonlinear diffusion model for limited-angle computed tomography
- Exact minimum rank approximation via Schatten p-norm minimization
- Approximation of solutions with singularities of various types for linear ill-posed problems
- Remote sensing via \(\ell_1\)-minimization
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- PROMP: a sparse recovery approach to lattice-valued signals
- Reconstruction of sparse connectivity in neural networks from spike train covariances
- A regularizing multilevel approach for nonlinear inverse problems
- Nonlinear frames and sparse reconstructions in Banach spaces
- Compressive sensing based machine learning strategy for characterizing the flow around a cylinder with limited pressure measurements
- Atmospheric radar imaging improvements using compressed sensing and MIMO
- Spectral dynamics and regularization of incompletely and irregularly measured data
- Linear total variation approximate regularized nuclear norm optimization for matrix completion
- A generalized sampling and preconditioning scheme for sparse approximation of polynomial chaos expansions
- A preconditioning approach for improved estimation of sparse polynomial chaos expansions
- Recovery of localised structure from signals with non-sparse components
- Sparse Legendre expansions via \(\ell_1\)-minimization
- Strong convergence of a modified proximal algorithm for solving the lasso
- Block-sparse compressed sensing: non-convex model and iterative re-weighted algorithm
- Observability for initial value problems with sparse initial data
- Two-dimensional random projection
- Compression and Conditional Emulation of Climate Model Output
- The modified accelerated Bregman method for regularized basis pursuit problem
- Increasing the semantic storage density of sparse distributed memory
- Generalization bounds for sparse random feature expansions
- Sparse recovery of sound fields using measurements from moving microphones
- Sparsity and incoherence in compressive sampling
- Iteratively weighted thresholding homotopy method for the sparse solution of underdetermined linear equations
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- Sparse modeling approach to obtaining the shear viscosity from smeared correlation functions
- Inverse potential problems for divergence of measures with total variation regularization
- Tight and full spark Chebyshev frames with real entries and worst-case coherence analysis
- An unbiased approach to compressed sensing
- A generalized conditional gradient method for dynamic inverse problems with optimal transport regularization
- Bias versus non-convexity in compressed sensing
- Sampling strategies for uncertainty reduction in categorical random fields: formulation, mathematical analysis and application to multiple-point simulations
- Joint-block-sparsity for efficient 2-D DOA estimation with multiple separable observations
- Reconstructed error and linear representation coefficients restricted by \(\ell_1\)-minimization for face recognition under different illumination and occlusion
- Sparse signal inversion with impulsive noise by dual spectral projected gradient method
- Sparse approximation of fitting surface by elastic net
- Sparse regression: scalable algorithms and empirical performance
- Localized Fourier analysis for graph signal processing
- Signal recovery from incomplete measurements in the presence of outliers
- Gradient projection Newton pursuit for sparsity constrained optimization
- A new linearized split Bregman iterative algorithm for image reconstruction in sparse-view X-ray computed tomography
- Sparse subsampling of flow measurements for finite-time Lyapunov exponent in domains with obstacles
- Distributed Decoding From Heterogeneous 1-Bit Compressive Measurements
- A note on the minimization of a Tikhonov functional with \(\ell^1\)-penalty
- Additive combinatorics: with a view towards computer science and cryptography -- an exposition
- Querying a Matrix Through Matrix-Vector Products
- LASSO Reloaded: A Variational Analysis Perspective with Applications to Compressed Sensing
- \(\mathrm{L_1RIP}\)-based robust compressed sensing
- For most large underdetermined systems of equations, the minimal \(\ell_1\)-norm near-solution approximates the sparsest near-solution
- Recovery of sparsest signals via \(\ell^q \)-minimization
- Data science, big data and statistics
- Generalized Mercer kernels and reproducing kernel Banach spaces
- Nonoverlapping convex polytopes with vertices in a Boolean cube and other problems in coding theory
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- A LONE code for the sparse control of quantum systems
- The Gelfand widths of \(\ell_p\)-balls for \(0 < p \leq 1\)
- Asymptotic risk and phase transition of \(l_1\)-penalized robust estimator
- Learning functions of few arbitrary linear parameters in high dimensions
- On verifiable sufficient conditions for sparse signal recovery via \(\ell_{1}\) minimization
- Nonconvex compressed sampling of natural images and applications to compressed MR imaging
- On monotone and primal-dual active set schemes for \(\ell^p\)-type problems, \(p \in (0,1]\)
- Robust estimation for an inverse problem arising in multiview geometry
- On the role of total variation in compressed sensing
- Sparse matrices in frame theory
- On the size of incoherent systems
- Sparse approximate solution of partial differential equations
- Adaptive iterative hard thresholding for least absolute deviation problems with sparsity constraints
- Reconstruction of sparse recurrent connectivity and inputs from the nonlinear dynamics of neuronal networks