Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
From MaRDI portal
Publication: Q3548002
DOI: 10.1109/TIT.2006.885507
zbMATH Open: 1309.94033
arXiv: math/0410542
OpenAlex: W2129638195
Wikidata: Q56813489 (Scholia: Q56813489)
MaRDI QID: Q3548002
FDO: Q3548002
Authors: Emmanuel J. Candès, Terence Tao
Publication date: 21 December 2008
Published in: IEEE Transactions on Information Theory
Abstract: Suppose we are given a vector \(f\) in \(\mathbb{R}^N\). How many linear measurements do we need to make about \(f\) to be able to recover \(f\) to within precision \(\varepsilon\) in the Euclidean (\(\ell_2\)) metric? Or more exactly, suppose we are interested in a class \(\mathcal{F}\) of such objects -- discrete digital signals, images, etc.; how many linear measurements do we need to recover objects from this class to within accuracy \(\varepsilon\)? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal \(f \in \mathcal{F}\) decay like a power-law (or if the coefficient sequence of \(f\) in a fixed basis decays like a power-law), then it is possible to reconstruct \(f\) to within very high accuracy from a small number of random measurements.
Full work available at URL: https://arxiv.org/abs/math/0410542
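The recovery result summarized in the abstract can be illustrated with a small numerical sketch (not the authors' code): draw a sparse vector, take a small number of random Gaussian measurements, and reconstruct it by \(\ell_1\)-minimization (basis pursuit), solved here as a linear program via SciPy. The dimensions and the LP split \(x = u - v\) are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to A x = b as a linear program.

    Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u + v)
    and the equality constraint becomes [A, -A] @ [u; v] = b.
    """
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
n, m, k = 50, 25, 3                    # ambient dim, measurements, sparsity

# A k-sparse signal in R^n (hypothetical test instance).
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

# Random Gaussian projections, far fewer than n of them.
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

x_hat = basis_pursuit(A, b)
recovery_error = np.max(np.abs(x_hat - x_true))
```

With the sparsity level well below the number of measurements, the \(\ell_1\) solution coincides with the true sparse vector up to solver tolerance, which is the qualitative phenomenon the paper makes precise.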
Cited in (first 100 items shown):
- Cirrhosis classification based on texture classification of random features
- Codes for exact support recovery of sparse vectors from inaccurate linear measurements and their decoding
- Structured random measurements in signal processing
- A generalized sampling and preconditioning scheme for sparse approximation of polynomial chaos expansions
- Compressive sensing based machine learning strategy for characterizing the flow around a cylinder with limited pressure measurements
- Remote sensing via \(\ell_1\)-minimization
- \(\mathrm{L_1RIP}\)-based robust compressed sensing
- Nonconvex compressed sampling of natural images and applications to compressed MR imaging
- Convergence of the linearized Bregman iteration for \(\ell _1\)-norm minimization
- On the linear independence of spikes and sines
- A dual split Bregman method for fast \(\ell ^{1}\) minimization
- An improved fast iterative shrinkage thresholding algorithm for image deblurring
- RIPless compressed sensing from anisotropic measurements
- Deep Learning--Based Dictionary Learning and Tomographic Image Reconstruction
- Approximation error in regularized SVD-based Fourier continuations
- Derandomized compressed sensing with nonuniform guarantees for \(\ell_1\) recovery
- Adaptive data analysis via sparse time-frequency representation
- Two are better than one: fundamental parameters of frame coherence
- A coordinate gradient descent method for \(\ell_{1}\)-regularized convex minimization
- A box constrained gradient projection algorithm for compressed sensing
- Compressed sensing image restoration based on data-driven multi-scale tight frame
- Conjugate gradient acceleration of iteratively re-weighted least squares methods
- Compressed sensing and dynamic mode decomposition
- Off-grid DOA estimation via real-valued sparse Bayesian method in compressed sensing
- An adaptive inverse scale space method for compressed sensing
- Compressed sensing from a harmonic analysis point of view
- On uncertainty principles in the finite dimensional setting
- Compressed sensing with coherent and redundant dictionaries
- Deterministic convolutional compressed sensing matrices
- Effective band-limited extrapolation relying on Slepian series and \(\ell^1\) regularization
- Restricted isometries for partial random circulant matrices
- Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization
- Optimal non-linear models for sparsity and sampling
- Combinatorial Algorithms for Compressed Sensing
- Two-dimensional digital filters with sparse coefficients
- Simple bounds for recovering low-complexity models
- Accelerated projected gradient method for linear inverse problems with sparsity constraints
- Random sampling of sparse trigonometric polynomials
- Empirical processes with a bounded \(\psi_1\) diameter
- Interpolation via weighted \(\ell_{1}\) minimization
- On the conditioning of random subdictionaries
- Ensemble extreme learning machine and sparse representation classification
- A weighted \(\ell_1\)-minimization approach for sparse polynomial chaos expansions
- Theory of compressive sensing via \(\ell_1\)-minimization: a non-RIP analysis and extensions
- Random sampling of sparse trigonometric polynomials. II: Orthogonal matching pursuit versus basis pursuit
- Low rank matrix recovery from rank one measurements
- Recovery of low-rank matrices based on the rank null space properties
- Compressive wave computation
- A dynamically bi-orthogonal method for time-dependent stochastic partial differential equations. II: Adaptivity and generalizations
- Quasi-linear compressed sensing
- Multiscale stochastic preconditioners in non-intrusive spectral projection
- Restricted isometry property of matrices with independent columns and neighborly polytopes by random sampling
- A novel cognitive ISAR imaging method with random stepped frequency chirp signal
- Influence factors of sparse microwave imaging radar system performance: approaches to waveform design and platform motion analysis
- Compressed sensing SAR imaging based on sparse representation in fractional Fourier domain
- Improved FOCUSS method for reconstruction of cluster structured sparse signals in radar imaging
- The essential ability of sparse reconstruction of different compressive sensing strategies
- Waveform design and high-resolution imaging of cognitive radar based on compressive sensing
- Sparse microwave imaging: principles and applications
- Recovery of high-dimensional sparse signals via \(\ell_1\)-minimization
- Enhancing \(\ell_1\)-minimization estimates of polynomial chaos expansions using basis selection
- Compressive sensing by random convolution
- Geometric approach to error-correcting codes and reconstruction of signals
- Exact optimization for the \(\ell ^{1}\)-compressive sensing problem using a modified Dantzig-Wolfe method
- The restricted isometry property for time-frequency structured random matrices
- Accelerated Bregman method for linearly constrained \(\ell _1-\ell _2\) minimization
- Deterministic construction of compressed sensing matrices from codes
- Restricted \(p\)-isometry property and its application for nonconvex compressive sensing
- Landmark recognition with sparse representation classification and extreme learning machine
- An algorithm solving compressive sensing problem based on maximal monotone operators
- Entropic regularization of the \(\ell _{0}\) function
- Sparsity in time-frequency representations
- Compressed sensing: how sharp is the restricted isometry property?
- Compressed sensing of color images
- Discussion: "A significance test for the lasso"
- Discussion: "A significance test for the lasso"
- Discussion: "A significance test for the lasso"
- Discussion: "A significance test for the lasso"
- Additive combinatorics: with a view towards computer science and cryptography -- an exposition
- Signature codes for weighted noisy adder channel, multimedia fingerprinting and compressed sensing
- Learning semidefinite regularizers
- New analysis of manifold embeddings and signal recovery from compressive measurements
- Generalized sampling and infinite-dimensional compressed sensing
- Yang-Baxter equations in quantum information
- A least-squares method for sparse low rank approximation of multivariate functions
- Stochastic collocation algorithms using \(l_1\)-minimization for Bayesian solution of inverse problems
- On the generation of sampling schemes for magnetic resonance imaging
- Generalized Kalman smoothing: modeling and algorithms
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- Sparsity enforcing edge detection method for blurred and noisy Fourier data
- Uniform recovery in infinite-dimensional compressed sensing and applications to structured binary sampling
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- Fast and RIP-optimal transforms
- Near oracle performance and block analysis of signal space greedy methods
- Improved sparse Fourier approximation results: Faster implementations and stronger guarantees
- 2D sparse signal recovery via 2D orthogonal matching pursuit
- Greedy-like algorithms for the cosparse analysis model
- Instance-optimality in probability with an \(\ell _1\)-minimization decoder
- Sparse system identification for stochastic systems with general observation sequences
- Compressed sensing by inverse scale space and curvelet thresholding