A mathematical introduction to compressive sensing
DOI: 10.1007/978-0-8176-4948-7
zbMATH Open: 1315.94002
OpenAlex: W143004564
MaRDI QID: Q351503
Authors: Simon Foucart, Holger Rauhut
Publication date: 5 July 2013
Published in: Applied and Numerical Harmonic Analysis
Full work available at URL: https://doi.org/10.1007/978-0-8176-4948-7
Mathematics Subject Classification (MSC):
- 94A12 Signal theory (characterization, reconstruction, filtering, etc.)
- 94A08 Image processing (compression, reconstruction, etc.) in information and communication theory
- 94-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to information and communication theory
Cited in (first 100 items shown):
- Optimal $k$-Thresholding Algorithms for Sparse Optimization Problems
- Empirical average-case relation between undersampling and sparsity in X-ray CT
- A generalized sampling and preconditioning scheme for sparse approximation of polynomial chaos expansions
- A new generalized shrinkage conjugate gradient method for sparse recovery
- Compressed sensing with sparse binary matrices: instance optimal error guarantees in near-optimal time
- Dimensionality reduction with subgaussian matrices: a unified theory
- Computing and analyzing recoverable supports for sparse reconstruction
- Long gaps between primes
- Randomized numerical linear algebra: Foundations and algorithms
- Guarantees of total variation minimization for signal recovery
- Theoretical frame properties of wave-packet matrices over prime fields
- Sparse recovery under weak moment assumptions
- Generalized sampling and infinite-dimensional compressed sensing
- Sensitivity of low-rank matrix recovery
- On tensor product approximation of analytic functions
- A geometrical stability condition for compressed sensing
- Non-uniform recovery guarantees for binary measurements and infinite-dimensional compressed sensing
- Convolutional neural networks analyzed via convolutional sparse coding
- Tensor theta norms and low rank recovery
- Fast Phase Retrieval from Local Correlation Measurements
- Gaussian approximations in high dimensional estimation
- Semidefinite programming relaxation methods for global optimization problems with sparse polynomials and unbounded semialgebraic feasible sets
- Control of multi-agent systems: results, open problems, and applications
- Breaking the curse for uniform approximation in Hilbert spaces via Monte Carlo methods
- Structure and Optimisation in Computational Harmonic Analysis: On Key Aspects in Sparse Regularisation
- Stable low-rank matrix recovery via null space properties
- Derandomized compressed sensing with nonuniform guarantees for \(\ell_1\) recovery
- Average best \(m\)-term approximation
- Phase retrieval from Gabor measurements
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- Hellmann-Feynman connection for the relative Fisher information
- Carl's inequality for quasi-Banach spaces
- A unified approach to uniform signal recovery from nonlinear observations
- Improved bounds for the RIP of subsampled circulant matrices
- A unified framework for linear dimensionality reduction in L1
- Sparse solutions of linear complementarity problems
- Conjugate gradient acceleration of iteratively re-weighted least squares methods
- A class of deterministic sensing matrices and their application in harmonic detection
- Low rank tensor recovery via iterative hard thresholding
- Sparse high-dimensional FFT based on rank-1 lattice sampling
- The uniform sparse FFT with application to PDEs with random coefficients
- Sparse reconstruction with multiple Walsh matrices
- Sparse recovery with integrality constraints
- Deterministic bounds for restricted isometry in compressed sensing matrices
- On maximal relative projection constants
- Analysis \(\ell_1\)-recovery with frames and Gaussian measurements
- Robust sparse phase retrieval made easy
- TV-based reconstruction of periodic functions
- Revisiting compressed sensing: exploiting the efficiency of simplex and sparsification methods
- Interpolation via weighted \(\ell_{1}\) minimization
- Interactions between compressed sensing random matrices and high dimensional geometry
- 1-bit compressive sensing: reformulation and RRSP-based sign recovery theory
- Regularity properties of non-negative sparsity sets
- Explicit matrices with the restricted isometry property: breaking the square-root bottleneck
- Tensor completion in hierarchical tensor representations
- Sampling theory. Beyond bandlimited systems
- Complexity of linear ill-posed problems in Hilbert space
- Improved recovery guarantees for phase retrieval from coded diffraction patterns
- Low rank matrix recovery from rank one measurements
- Recovery of low-rank matrices based on the rank null space properties
- Quasi-linear compressed sensing
- A survey of compressed sensing
- Testable uniqueness conditions for empirical assessment of undersampling levels in total variation-regularized X-ray CT
- The geometry of off-the-grid compressed sensing
- Compressive sensing and structured random matrices
- Low-rank matrix recovery via rank one tight frame measurements
- Accelerating stochastic collocation methods for partial differential equations with random input data
- Compressed sensing of low-rank plus sparse matrices
- Recovery of sparse integer vectors from linear measurements
- Optimally sparse data representations
- Paved with good intentions: analysis of a randomized block Kaczmarz method
- Sparse linear regression from perturbed data
- New conditions on stable recovery of weighted sparse signals via weighted \(l_1\) minimization
- Approximately normalized iterative hard thresholding for nonlinear compressive sensing
- Sparsest piecewise-linear regression of one-dimensional data
- Enhancing matrix completion using a modified second-order total variation
- Compressed sensing with local structure: uniform recovery guarantees for the sparsity in levels class
- Moving horizon estimation for ARMAX processes with additive output noise
- The recovery of ridge functions on the hypercube suffers from the curse of dimensionality
- A deterministic algorithm for constructing multiple rank-1 lattices of near-optimal size
- Splines are universal solutions of linear inverse problems with generalized TV regularization
- A representer theorem for deep neural networks
- Greedy subspace pursuit for joint sparse recovery
- Recovery analysis for weighted mixed \(\ell_2 / \ell_p\) minimization with \(0 < p \leq 1\)
- A deterministic sparse FFT for functions with structured Fourier sparsity
- Sparse approximate reconstruction decomposed by two optimization problems
- Measurement matrix optimization via mutual coherence minimization for compressively sensed signals reconstruction
- Erasure coding for fault-oblivious linear system solvers
- Joint sparse recovery based on variances
- Approximation spaces of deep neural networks
- Proximal Gradient Methods for Machine Learning and Imaging
- A simple proof of the Grünbaum conjecture
- Sparse recovery in probability via \(l_q\)-minimization with Weibull random matrices for \(0 < q\leq 1\)
- Computing the spark: mixed-integer programming for the (vector) matroid girth problem
- Stable recovery of low-dimensional cones in Hilbert spaces: one RIP to rule them all
- Spark-level sparsity and the \(\ell_1\) tail minimization
- Column normalization of a random measurement matrix
- A new class of fully discrete sparse Fourier transforms: faster stable implementations with guarantees
- Computing a quantity of interest from observational data
- High-dimensional sparse FFT based on sampling along multiple rank-1 lattices
This page was built for publication: A mathematical introduction to compressive sensing