Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
From MaRDI portal
Publication:3548002
Abstract: Suppose we are given a vector \(f\) in \(\mathbb{R}^N\). How many linear measurements do we need to make about \(f\) to be able to recover \(f\) to within precision \(\varepsilon\) in the Euclidean (\(\ell_2\)) metric? Or more exactly, suppose we are interested in a class \(\mathcal{F}\) of such objects--discrete digital signals, images, etc.; how many linear measurements do we need to recover objects from this class to within accuracy \(\varepsilon\)? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal \(f \in \mathcal{F}\) decay like a power-law (or if the coefficient sequence of \(f\) in a fixed basis decays like a power-law), then it is possible to reconstruct \(f\) to within very high accuracy from a small number of random measurements.
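The recovery scheme the abstract alludes to can be illustrated by basis pursuit: draw random Gaussian measurements of a sparse vector and reconstruct it by \(\ell_1\) minimization, solved here as a linear program. This is a hedged illustrative sketch, not code from the paper; the dimensions, sparsity level, and random seed are arbitrary choices for the demonstration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, k = 128, 48, 5                  # ambient dimension, measurements, sparsity

# A k-sparse signal in R^N
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)

# n random Gaussian projections (n << N)
A = rng.standard_normal((n, N)) / np.sqrt(n)
y = A @ x

# Basis pursuit: min ||x||_1 subject to Ax = y.
# Standard LP reformulation: x = u - v with u, v >= 0, minimize sum(u + v).
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]

print(np.linalg.norm(x_hat - x))      # small residual: recovery up to solver tolerance
```

With \(n\) on the order of \(k \log(N/k)\) Gaussian measurements, exact recovery of a \(k\)-sparse vector holds with high probability, which is why \(n = 48\) measurements suffice here for \(k = 5\) nonzeros in dimension \(N = 128\).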
Cited in
- Self-calibration and biconvex compressive sensing
- Idempotents and compressive sampling
- Slope meets Lasso: improved oracle bounds and optimality
- Collaborative block compressed sensing reconstruction with dual-domain sparse representation
- Effect of sensing matrices on quality index parameters for block sparse Bayesian learning-based EEG compressive sensing
- Intrinsic modeling of stochastic dynamical systems using empirical geometry
- Rapid, large-scale, and effective detection of COVID-19 via non-adaptive testing
- Compressive sensing with redundant dictionaries and structured measurements
- Approximation of frame based missing data recovery
- Reconstruction and subgaussian processes
- Preserving injectivity under subgaussian mappings and its application to compressed sensing
- Large deviations, dynamics and phase transitions in large stochastic and disordered neural networks
- Analog random coding
- A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery
- CGIHT: conjugate gradient iterative hard thresholding for compressed sensing and matrix completion
- A simple Gaussian measurement bound for exact recovery of block-sparse signals
- Sparse signal reconstruction via the approximations of \(\ell_0\) quasinorm
- A fast recovery method of 2D geometric compressed sensing signal
- Compressive sampling and rapid reconstruction of broadband frequency hopping signals with interference
- DFT spectrum-sparsity-based quasi-periodic signal identification and application
- Robust recovery of complex exponential signals from random Gaussian projections via low rank Hankel matrix reconstruction
- Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations
- Analysis of sparse MIMO radar
- Signal recovery under mutual incoherence property and oracle inequalities
- Discrete uncertainty principles and sparse signal processing
- Improved bounds for sparse recovery from subsampled random convolutions
- Spectral dynamics and regularization of incompletely and irregularly measured data
- The numerics of phase retrieval
- Cross validation in Lasso and its acceleration
- Estimation of block sparsity in compressive sensing
- Reconstruction of systems with impulses and delays from time series data
- Randomized interpolative decomposition of separated representations
- Construction of a full row-rank matrix system for multiple scanning directions in discrete tomography
- An efficient privacy-preserving compressive data gathering scheme in WSNs
- Gelfand numbers related to structured sparsity and Besov space embeddings with small mixed smoothness
- A new sparse recovery method for the inverse acoustic scattering problem
- A simple and feasible method for a class of large-scale \(l^1\)-problems
- Data science, big data and statistics
- Wavelet denoising via sparse representation
- On support sizes of restricted isometry constants
- Rapid compressed sensing reconstruction: a semi-tensor product approach
- Approximation with random bases: pro et contra
- Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms
- Low complexity regularization of linear inverse problems
- On sparse representation of analytic signal in Hardy space
- Beyond sparsity: the role of \(L_{1}\)-optimizer in pattern classification
- Sparse representation of signals in Hardy space
- Sparse identification of posynomial models
- Signal separation under coherent dictionaries and \(\ell_p\)-bounded noise
- Linearized alternating directions method for \(\ell_1\)-norm inequality constrained \(\ell_1\)-norm minimization
- A new bound on the block restricted isometry constant in compressed sensing
- Sparse approximate solution of fitting surface to scattered points by MLASSO model
- Enhanced total variation minimization for stable image reconstruction
- Primal-dual first-order methods for a class of cone programming
- Robust multi-image processing with optimal sparse regularization
- A hierarchical framework for recovery in compressive sensing
- Local variable selection of nonlinear nonparametric systems by first order expansion
- Deterministic construction of compressed sensing matrices based on semilattices
- Sparse probit linear mixed model
- Recovering an unknown signal completely submerged in strong noise by a new stochastic resonance method
- Sparse signal representation by adaptive non-uniform B-spline dictionaries on a compact interval
- Sparse recovery from inaccurate saturated measurements
- Representation and coding of signal geometry
- Low-rank matrix completion in a general non-orthogonal basis
- On the uniqueness of sparse time-frequency representation of multiscale data
- The matrix splitting based proximal fixed-point algorithms for quadratically constrained \(\ell_{1}\) minimization and Dantzig selector
- TV+TV regularization with nonconvex sparseness-inducing penalty for image restoration
- Analysis of the equivalence relationship between \(l_{0}\)-minimization and \(l_{p}\)-minimization
- Convergence rates of learning algorithms by random projection
- The stochastic geometry of unconstrained one-bit data compression
- Noisy 1-bit compressive sensing: models and algorithms
- Sampling in the analysis transform domain
- Introducing the counter mode of operation to compressed sensing based encryption
- Sparsity and incoherence in orthogonal matching pursuit
- A quasi-Newton algorithm for nonconvex, nonsmooth optimization with global convergence guarantees
- From compression to compressed sensing
- Stable image reconstruction using transformed total variation minimization
- Inverse scale space decomposition
- Phase retrieval from incomplete magnitude information via total variation regularization
- The recovery guarantee for orthogonal matching pursuit method to reconstruct sparse polynomials
- Signal recovery under cumulative coherence
- A two-step iterative algorithm for sparse hyperspectral unmixing via total variation
- Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO
- Consistency of \(\ell_1\) recovery from noisy deterministic measurements
- Comparison of parametric sparse recovery methods for ISAR image formation
- Deterministic sampling of sparse trigonometric polynomials
- Extraction of intrawave signals using the sparse time-frequency representation method
- Stability of the elastic net estimator
- Sparse time-frequency decomposition for multiple signals with same frequencies
- Compressed sensing with structured sparsity and structured acquisition
- Greedy signal space methods for incoherence and beyond
- Compressive sensing for multi-static scattering analysis
- On generating optimal sparse probabilistic Boolean networks with maximum entropy from a positive stationary distribution
- A Gradient-Enhanced L1 Approach for the Recovery of Sparse Trigonometric Polynomials
- A theoretical result of sparse signal recovery via alternating projection method
- Smoothing projected Barzilai-Borwein method for constrained non-Lipschitz optimization
- Coorbit theory, multi-\(\alpha \)-modulation frames, and the concept of joint sparsity for medical multichannel data analysis
- The local convexity of solving systems of quadratic equations
- Two new lower bounds for the spark of a matrix
- Sparse broadband beamformer design via proximal optimization techniques