Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
From MaRDI portal
Publication: 3548002
DOI: 10.1109/TIT.2006.885507 · zbMATH Open: 1309.94033 · arXiv: math/0410542 · OpenAlex: W2129638195 · Wikidata: Q56813489 · Scholia: Q56813489 · MaRDI QID: Q3548002 · FDO: Q3548002
Authors: Emmanuel J. Candès, Terence Tao
Publication date: 21 December 2008
Published in: IEEE Transactions on Information Theory
Abstract: Suppose we are given a vector \(f\) in \(\mathbb{R}^N\). How many linear measurements do we need to make about \(f\) to be able to recover \(f\) to within precision \(\varepsilon\) in the Euclidean (\(\ell_2\)) metric? Or more exactly, suppose we are interested in a class \(\mathcal{F}\) of such objects -- discrete digital signals, images, etc.; how many linear measurements do we need to recover objects from this class to within accuracy \(\varepsilon\)? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal \(|f|_{(k)}\) decay like a power-law (or if the coefficient sequence of \(f\) in a fixed basis decays like a power-law), then it is possible to reconstruct \(f\) to within very high accuracy from a small number of random measurements.
Full work available at URL: https://arxiv.org/abs/math/0410542
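The recovery principle stated in the abstract can be illustrated with a minimal sketch: draw a random Gaussian measurement matrix, measure a sparse vector, and reconstruct it by \(\ell_1\)-minimization cast as a linear program. The dimensions, the Gaussian ensemble, and the LP formulation below are illustrative choices for a demonstration, not the paper's exact statement.

```python
# Minimal compressed-sensing sketch (illustrative, not the paper's algorithm):
# recover a k-sparse vector from m < n random Gaussian measurements by solving
# min ||x||_1 subject to Ax = y, rewritten as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 30, 3               # ambient dimension, measurements, sparsity (assumed values)

# k-sparse ground truth and a random Gaussian measurement matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true                    # the m linear measurements

# Split x = u - v with u, v >= 0, so ||x||_1 = 1^T [u; v]:
# minimize 1^T [u; v]  subject to  [A, -A] [u; v] = y
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With these proportions (\(m = 30\) measurements for a 3-sparse vector in \(\mathbb{R}^{50}\)), exact recovery succeeds with high probability over the draw of the Gaussian matrix, which is the phenomenon the paper quantifies.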
Cited In (first 100 items shown)
- Adaptive compressive learning for prediction of protein-protein interactions from primary sequence
- Sparse recovery properties of discrete random matrices
- Convergence of a data-driven time-frequency analysis method
- Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals
- A simple and flexible model order reduction method for FFT-based homogenization problems using a sparse sampling technique
- Reconstructed error and linear representation coefficients restricted by \(\ell_1\)-minimization for face recognition under different illumination and occlusion
- Sparse approximation of fitting surface by elastic net
- Sparse sensing and DMD-based identification of flow regimes and bifurcations in complex flows
- Divide and conquer: an incremental sparsity promoting compressive sampling approach for polynomial chaos expansions
- Recursion for the smallest eigenvalue density of \(\beta \)-Wishart-Laguerre ensemble
- Joint image compression-encryption scheme using entropy coding and compressive sensing
- An overview on the applications of matrix theory in wireless communications and signal processing
- Recovery analysis for weighted mixed \(\ell_2 / \ell_p\) minimization with \(0 < p \leq 1\)
- Reconstruction of sparse signals in impulsive disturbance environments
- Difference-of-convex learning: directional stationarity, optimality, and sparsity
- Weighted \(\ell_1\)-minimization for sparse recovery under arbitrary prior information
- Sparse recovery in probability via \(l_q\)-minimization with Weibull random matrices for \(0 < q\leq 1\)
- Spark-level sparsity and the \(\ell_1\) tail minimization
- Phase retrieval from Fourier measurements with masks
- A coordinate descent homotopy method for linearly constrained nonsmooth convex minimization
- Multiscale blind source separation
- DC approximation approach for \(\ell_0\)-minimization in compressed sensing
- Iterative re-weighted least squares algorithm for \(l_p\)-minimization with tight frame and \(0 < p \leq 1\)
- On the Doubly Sparse Compressed Sensing Problem
- On the structure of time-delay embedding in linear models of non-linear dynamical systems
- On perturbed steepest descent methods with inexact line search for bilevel convex optimization
- Augmented sparse reconstruction of protein signaling networks
- Self-adaptive image reconstruction inspired by insect compound eye mechanism
- Robust group lasso: model and recoverability
- Combined similarity to reference image with joint sparsifying transform for longitudinal compressive sensing MRI
- Efficient extreme learning machine via very sparse random projection
- Channel estimation for finite scatterers massive multi-user MIMO system
- Model selection with distributed SCAD penalty
- Detecting a vector based on linear measurements
- A recursive procedure for density estimation on the binary hypercube
- Modern regularization methods for inverse problems
- Volumes of unit balls of mixed sequence spaces
- Random matrices and erasure robust frames
- Elastic-net regularization versus \(\ell_1\)-regularization for linear inverse problems with quasi-sparse solutions
- A novel compressed sensing scheme for photoacoustic tomography
- The coefficient regularized regression with random projection
- Frames as codes
- Accelerating near-field 3D imaging approach for joint high-resolution imaging and phase error correction
- Optimization methods for regularization-based ill-posed problems: a survey and a multi-objective framework
- Analysis of the ratio of \(\ell_1\) and \(\ell_2\) norms in compressed sensing
- An enhanced diagnosis method for weak fault features of bearing acoustic emission signal based on compressed sensing
- Parallel magnetic resonance imaging acceleration with a hybrid sensing approach
- Multilevel preconditioning and adaptive sparse solution of inverse problems
- Sparse approximation using \(\ell_1-\ell_2\) minimization and its application to stochastic collocation
- Stochastic collocation methods via \(\ell_1\) minimization using randomized quadratures
- Sparse signal reconstruction based on multiparameter approximation function with smoothed \(\ell_0\) norm
- Roles of clustering coefficient for the network reconstruction
- Median filter based compressed sensing model with application to MR image reconstruction
- Compressed sensing of color images
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Additive combinatorics: with a view towards computer science and cryptography -- an exposition
- Signature codes for weighted noisy adder channel, multimedia fingerprinting and compressed sensing
- Learning semidefinite regularizers
- New analysis of manifold embeddings and signal recovery from compressive measurements
- Generalized sampling and infinite-dimensional compressed sensing
- Yang-Baxter equations in quantum information
- A least-squares method for sparse low rank approximation of multivariate functions
- Stochastic collocation algorithms using \(l_1\)-minimization for Bayesian solution of inverse problems
- On the generation of sampling schemes for magnetic resonance imaging
- Generalized Kalman smoothing: modeling and algorithms
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- Sparsity enforcing edge detection method for blurred and noisy Fourier data
- Uniform recovery in infinite-dimensional compressed sensing and applications to structured binary sampling
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- Fast and RIP-optimal transforms
- Near oracle performance and block analysis of signal space greedy methods
- Improved sparse Fourier approximation results: Faster implementations and stronger guarantees
- 2D sparse signal recovery via 2D orthogonal matching pursuit
- Greedy-like algorithms for the cosparse analysis model
- Instance-optimality in probability with an \(\ell _1\)-minimization decoder
- Sparse system identification for stochastic systems with general observation sequences
- Compressed sensing by inverse scale space and curvelet thresholding
- Compressed sensing with preconditioning for sparse recovery with subsampled matrices of Slepian prolate functions
- Nonlinear least squares in \(\mathbb R^{N}\)
- Improving the \( k\)-\textit{compressibility} of hyper reduced order models with moving sources: applications to welding and phase change problems
- Optimization methods for synthetic aperture radar imaging
- Fast \(\ell _{1}\) minimization by iterative thresholding for multidimensional NMR spectroscopy
- Majorizing measures and proportional subsets of bounded orthonormal systems
- Robust face recognition via block sparse Bayesian learning
- Nonmonotone Barzilai-Borwein gradient algorithm for \(\ell_1\)-regularized nonsmooth minimization in compressive sensing
- A modified Newton projection method for \(\ell _1\)-regularized least squares image deblurring
- Sparse time-frequency representation of nonlinear and nonstationary data
- A sharp nonasymptotic bound and phase diagram of \(L_{1/2}\) regularization
- Compressive optical deflectometric tomography: a constrained total-variation minimization approach
- Deterministic construction of sparse binary matrices via incremental integer optimization
- A convergent overlapping domain decomposition method for total variation minimization
- Log-concavity and strong log-concavity: a review
- Gaussian averages of interpolated bodies and applications to approximate reconstruction
- Compressed subspace matching on the continuum
- High-dimensional inference in misspecified linear models
- On the null space property of \(l_q\)-minimization for \(0 < q \leq 1\) in compressed sensing