Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
Publication: MaRDI QID Q3548002
DOI: 10.1109/TIT.2006.885507
zbMATH Open: 1309.94033
arXiv: math/0410542
OpenAlex: W2129638195
Wikidata: Q56813489 (Scholia: Q56813489)
FDO: Q3548002
Authors: Emmanuel J. Candès, Terence Tao
Publication date: 21 December 2008
Published in: IEEE Transactions on Information Theory
Abstract: Suppose we are given a vector \(f\) in \(\mathbb{R}^N\). How many linear measurements do we need to make about \(f\) to be able to recover \(f\) to within precision \(\varepsilon\) in the Euclidean (\(\ell_2\)) metric? Or, more exactly, suppose we are interested in a class \(\mathcal{F}\) of such objects -- discrete digital signals, images, etc.; how many linear measurements do we need to recover objects from this class to within accuracy \(\varepsilon\)? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal \(f \in \mathcal{F}\) decay like a power-law (or if the coefficient sequence of \(f\) in a fixed basis decays like a power-law), then it is possible to reconstruct \(f\) to within very high accuracy from a small number of random measurements.
Full work available at URL: https://arxiv.org/abs/math/0410542
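The recovery principle summarized in the abstract can be illustrated with a small numerical sketch: draw a random Gaussian measurement matrix, measure a sparse vector, and reconstruct it by \(\ell_1\)-minimization (basis pursuit) posed as a linear program. The problem sizes, the Gaussian sensing matrix, and the use of `scipy.optimize.linprog` are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch: recover a k-sparse vector f in R^N from n << N random Gaussian
# measurements y = A f by solving  min ||x||_1  s.t.  A x = y.
# Sizes below are illustrative assumptions.
rng = np.random.default_rng(0)
N, n, k = 60, 30, 4  # ambient dimension, number of measurements, sparsity

# Build a k-sparse ground-truth signal.
f = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
f[support] = rng.standard_normal(k)

# Random projection (Gaussian measurement matrix) and measurements.
A = rng.standard_normal((n, N)) / np.sqrt(n)
y = A @ f

# Linear-program form via the split x = u - v with u, v >= 0:
# minimize sum(u) + sum(v) subject to A(u - v) = y.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]

# With k small relative to n, recovery is expected to be near-exact.
print("recovery error:", np.linalg.norm(x_hat - f))
```

For these sizes the \(\ell_1\) solution typically coincides with the true sparse vector up to solver tolerance, which is the phenomenon the paper quantifies: the number of random measurements needed scales with the sparsity (up to logarithmic factors), not with the ambient dimension \(N\).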
Cited In (first 100 items shown)
- A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery
- Approximation with random bases: pro et contra
- Sparse identification of posynomial models
- The numerics of phase retrieval
- Spectral dynamics and regularization of incompletely and irregularly measured data
- Collaborative block compressed sensing reconstruction with dual-domain sparse representation
- Rapid, large-scale, and effective detection of COVID-19 via non-adaptive testing
- Data science, big data and statistics
- Signal separation under coherent dictionaries and \(\ell_p\)-bounded noise
- Reconstruction and subgaussian processes
- On sparse representation of analytic signal in Hardy space
- Wavelet denoising via sparse representation
- Approximation of frame based missing data recovery
- Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations
- Cross validation in Lasso and its acceleration
- Self-calibration and biconvex compressive sensing
- Slope meets Lasso: improved oracle bounds and optimality
- Low complexity regularization of linear inverse problems
- Gelfand numbers related to structured sparsity and Besov space embeddings with small mixed smoothness
- Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms
- Estimation of block sparsity in compressive sensing
- Reconstruction of systems with impulses and delays from time series data
- Randomized interpolative decomposition of separated representations
- Construction of a full row-rank matrix system for multiple scanning directions in discrete tomography
- A fast recovery method of 2D geometric compressed sensing signal
- Compressive sampling and rapid reconstruction of broadband frequency hopping signals with interference
- DFT spectrum-sparsity-based quasi-periodic signal identification and application
- Robust recovery of complex exponential signals from random Gaussian projections via low rank Hankel matrix reconstruction
- Analysis of sparse MIMO radar
- Rapid compressed sensing reconstruction: a semi-tensor product approach
- Analog random coding
- Effect of sensing matrices on quality index parameters for block sparse Bayesian learning-based EEG compressive sensing
- Large deviations, dynamics and phase transitions in large stochastic and disordered neural networks
- On support sizes of restricted isometry constants
- An efficient privacy-preserving compressive data gathering scheme in WSNs
- A new sparse recovery method for the inverse acoustic scattering problem
- Preserving injectivity under subgaussian mappings and its application to compressed sensing
- Idempotents and compressive sampling
- Intrinsic modeling of stochastic dynamical systems using empirical geometry
- Signal recovery under mutual incoherence property and oracle inequalities
- A simple and feasible method for a class of large-scale \(l^1\)-problems
- Beyond sparsity: the role of \(L_{1}\)-optimizer in pattern classification
- CGIHT: conjugate gradient iterative hard thresholding for compressed sensing and matrix completion
- Sparse signal reconstruction via the approximations of \(\ell_0\) quasinorm
- A simple Gaussian measurement bound for exact recovery of block-sparse signals
- Improved bounds for sparse recovery from subsampled random convolutions
- Sparse representation of signals in Hardy space
- Compressive sensing with redundant dictionaries and structured measurements
- Discrete uncertainty principles and sparse signal processing
- Coorbit theory, multi-\(\alpha \)-modulation frames, and the concept of joint sparsity for medical multichannel data analysis
- Sparse approximate solution of partial differential equations
- On the volume of unit balls of finite-dimensional Lorentz spaces
- The stochastic geometry of unconstrained one-bit data compression
- Signal recovery under cumulative coherence
- Sparse signal representation by adaptive non-uniform B-spline dictionaries on a compact interval
- Sparse recovery from inaccurate saturated measurements
- On the uniqueness of sparse time-frequency representation of multiscale data
- Sparse time-frequency decomposition for multiple signals with same frequencies
- Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO
- Extraction of intrawave signals using the sparse time-frequency representation method
- A Gradient-Enhanced L1 Approach for the Recovery of Sparse Trigonometric Polynomials
- On generating optimal sparse probabilistic Boolean networks with maximum entropy from a positive stationary distribution.
- A theoretical result of sparse signal recovery via alternating projection method
- Robust multi-image processing with optimal sparse regularization
- Compressive sensing for multi-static scattering analysis
- Sparse approximate solution of fitting surface to scattered points by MLASSO model
- Low-rank matrix completion in a general non-orthogonal basis
- Convergence rates of learning algorithms by random projection
- Comparison of parametric sparse recovery methods for ISAR image formation
- Stability of the elastic net estimator
- Prediction of protein-protein interaction by metasample-based sparse representation
- Expander \(\ell_0\)-decoding
- TV+TV regularization with nonconvex sparseness-inducing penalty for image restoration
- Linearized alternating directions method for \(\ell_1\)-norm inequality constrained \(\ell_1\)-norm minimization
- Noisy 1-bit compressive sensing: models and algorithms
- Sampling in the analysis transform domain
- Introducing the counter mode of operation to compressed sensing based encryption
- A quasi-Newton algorithm for nonconvex, nonsmooth optimization with global convergence guarantees
- From compression to compressed sensing
- Sparse sensor placement optimization for classification
- Primal-dual first-order methods for a class of cone programming
- Generalizing CoSaMP to signals from a union of low dimensional linear subspaces
- Representation and coding of signal geometry
- A two-step iterative algorithm for sparse hyperspectral unmixing via total variation
- Enhanced total variation minimization for stable image reconstruction
- The matrix splitting based proximal fixed-point algorithms for quadratically constrained \(\ell_{1}\) minimization and Dantzig selector
- Analysis of the equivalence relationship between \(l_{0}\)-minimization and \(l_{p}\)-minimization
- The recovery guarantee for orthogonal matching pursuit method to reconstruct sparse polynomials
- Compressed sensing with structured sparsity and structured acquisition
- Smoothing projected Barzilai-Borwein method for constrained non-Lipschitz optimization
- From low- to high-dimensional moments without magic
- A hierarchical framework for recovery in compressive sensing
- Local variable selection of nonlinear nonparametric systems by first order expansion
- Deterministic construction of compressed sensing matrices based on semilattices
- Sparse probit linear mixed model
- Two new lower bounds for the spark of a matrix
- The local convexity of solving systems of quadratic equations
- Sparsity and incoherence in orthogonal matching pursuit
- Compressive sampling of ensembles of correlated signals
- Strengthening hash families and compressive sensing