A mathematical introduction to compressive sensing

From MaRDI portal
Publication:351503

DOI: 10.1007/978-0-8176-4948-7 · zbMath: 1315.94002 · OpenAlex: W143004564 · MaRDI QID: Q351503

Simon Foucart, Holger Rauhut

Publication date: 5 July 2013

Published in: Applied and Numerical Harmonic Analysis

Full work available at URL: https://doi.org/10.1007/978-0-8176-4948-7

Related Items

A Simple Tool for Bounding the Deviation of Random Matrices on Geometric Sets, On Multiplier Processes Under Weak Moment Assumptions, Johnson–Lindenstrauss Embeddings with Kronecker Structure, Efficient Identification of Butterfly Sparse Matrix Factorizations, Safe Rules for the Identification of Zeros in the Solutions of the SLOPE Problem, The numerics of phase retrieval, Randomized numerical linear algebra: Foundations and algorithms, A survey on compressive sensing: classical results and recent advancements, Unnamed Item, Stable and robust $\ell_p$-constrained compressive sensing recovery via robust width property, Optimal Injectivity Conditions for Bilinear Inverse Problems with Applications to Identifiability of Deconvolution Problems, Lower bounds on the low-distortion embedding dimension of submanifolds of \(\mathbb{R}^n\), Improved RIP-based bounds for guaranteed performance of two compressed sensing algorithms, Algebraic compressed sensing, Iterative hard thresholding for low CP-rank tensor models, Recovery of low-rank matrices based on the rank null space properties, A general null space property for sparse principal component analysis, Control of multi-agent systems: results, open problems, and applications, Nonlinear approximation in bounded orthonormal product bases, Embracing off-the-grid samples, HARFE: hard-ridge random feature expansion, Robust sensing of low-rank matrices with non-orthogonal sparse decomposition, Random sampling over locally compact abelian groups and inversion of the Radon transform, The null space property of the weighted ℓr − ℓ1 minimization, Model selection via reweighted partial sparse recovery, Block sparse signal recovery via minimizing the block \(q\)-ratio sparsity, Short Communication: Localized Adversarial Artifacts for Compressed Sensing MRI, Sparse Bayesian inference with regularized Gaussian distributions *, Sampling numbers of smoothness classes via \(\ell^1\)-minimization, Sparse reconstruction via the mixture optimization model with iterative support estimate, Error guarantees for least squares approximation with noisy samples in domain adaptation, Random points are good for universal discretization, A theory of capacity and sparse neural encoding, Sparse convex optimization toolkit: a mixed-integer framework, Approximation Analysis of Convolutional Neural Networks, Analysis of sparse recovery for Legendre expansions using envelope bound, Doubly iteratively reweighted algorithm for constrained compressed sensing models, Solution uniqueness of convex piecewise affine functions based optimization with applications to constrained 1 minimization, A Generalization of Wirtinger Flow for Exact Interferometric Inversion, Importance sampling in signal processing applications, Recovery of sparse integer vectors from linear measurements, One-bit sensing, discrepancy and Stolarsky's principle, Unnamed Item, PhaseMax: Stable guarantees from noisy sub-Gaussian measurements, Robust recovery of signals with partially known support information using weighted BPDN, An unbiased approach to compressed sensing, TV-based reconstruction of periodic functions, Compressive time-of-flight 3D imaging using block-structured sensing matrices, Learning the invisible: a hybrid deep learning-shearlet framework for limited angle computed tomography, Stability analysis of a class of sparse optimization problems, Extracting Structured Dynamical Systems Using Sparse Optimization With Very Few Samples, A Compressive Spectral Collocation Method for the Diffusion Equation 
Under the Restricted Isometry Property, Error Localization of Best $L_{1}$ Polynomial Approximants, $l_p$ Regularization for Ensemble Kalman Inversion, A Gradient-Enhanced L1 Approach for the Recovery of Sparse Trigonometric Polynomials, Persistent homology for low-complexity models, The Trimmed Lasso: Sparse Recovery Guarantees and Practical Optimization by the Generalized Soft-Min Penalty, Low-Rank Tensor Recovery using Sequentially Optimal Modal Projections in Iterative Hard Thresholding (SeMPIHT), Variance-stabilization-based compressive inversion under Poisson or Poisson–Gaussian noise with analytical bounds, Optimal $k$-Thresholding Algorithms for Sparse Optimization Problems, Sparsity and level set regularization for near-field electromagnetic imaging in 3D, A mixed 1 regularization approach for sparse simultaneous approximation of parameterized PDEs, Weak Stability of 1-Minimization Methods in Sparse Data Reconstruction, Jointly low-rank and bisparse recovery: Questions and partial answers, An improved bound of cumulative coherence for signal recovery, Sparse recovery from extreme eigenvalues deviation inequalities, Endpoint Results for Fourier Integral Operators on Noncompact Symmetric Spaces, Weak-Type Estimates for the Metaplectic Representation Restricted to the Shearing and Dilation Subgroup of $$SL(2,\mathbb {R})$$, Spectral Methods for Passive Imaging: Nonasymptotic Performance and Robustness, Enhancing Compressed Sensing 4D Photoacoustic Tomography by Simultaneous Motion Estimation, Solving inverse problems using data-driven models, Adaptive Approximation by Optimal Weighted Least-Squares Methods, Nonconvex Optimization for 3-Dimensional Point Source Localization Using a Rotating Point Spread Function, Characterization of ℓ1 minimizer in one-bit compressed sensing, Randomized Projection Methods for Linear Systems with Arbitrarily Large Sparse Corruptions, On the Role of Total Variation in Compressed Sensing, Quasi-linear Compressed Sensing, Globally Optimizing Small Codes in Real Projective Spaces, Stochastic Collocation Algorithms Using $l_1$-Minimization for Bayesian Solution of Inverse Problems, Banach spaces for which the space of operators has 2𝔠closed ideals, Lower Memory Oblivious (Tensor) Subspace Embeddings with Fewer Random Bits: Modewise Methods for Least Squares, New Restricted Isometry Property Analysis for $\ell_1-\ell_2$ Minimization Methods, Multiscale Factorization of the Wave Equation with Application to Compressed Sensing Photoacoustic Tomography, Compressed Sensing Photoacoustic Tomography Reduces to Compressed Sensing for Undersampled Fourier Measurements, Improved Recovery Guarantees and Sampling Strategies for TV Minimization in Compressive Imaging, Compressive Sampling for Energy Spectrum Estimation of Turbulent Flows, A locally sequential refinement of the growth dynamics identification, Compressive Imaging Through Optical Fiber with Partial Speckle Scanning, Recovering Wavelet Coefficients from Binary Samples Using Fast Transforms, Orthogonal Trace-Sum Maximization: Tightness of the Semidefinite Relaxation and Guarantee of Locally Optimal Solutions, On the Absence of Uniform Recovery in Many Real-World Applications of Compressed Sensing and the Restricted Isometry Property and Nullspace Property in Levels, Fast state tomography with optimal error bounds, Nonuniform recovery of fusion frame structured sparse signals, Large deviations for uniform projections of $p$-radial distributions on $\ell_p^n$-balls, Well-Posed Bayesian Inverse Problems: Priors 
with Exponential Tails, Nonlinear approximation spaces for inverse problems, An analysis of noise folding for low-rank matrix recovery, Efficiently approximating the probability of deadline misses in real-time systems, A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions, Approximation, Gelfand, and Kolmogorov numbers of Schatten class embeddings, Derandomized compressed sensing with nonuniform guarantees for \(\ell_1\) recovery, On the randomised stability constant for inverse problems, Sparse solutions to an underdetermined system of linear equations via penalized Huber loss, A new family of polyphase sequences with low correlation, Sparse high-dimensional linear regression. Estimating squared error and a phase transition, Spectral estimation from simulations via sketching, Sampling discretization and related problems, Degrees of freedom for off-the-grid sparse estimation, Post-model-selection inference in linear regression models: an integrated review, A modified orthogonal matching pursuit for construction of sparse probabilistic Boolean networks, \(k\)-point semidefinite programming bounds for equiangular lines, Compressive sensing of high betweenness centrality nodes in networks, Weighted \(p\)-radial distributions on Euclidean and matrix \(p\)-balls with applications to large deviations, The restricted isometry property of block diagonal matrices for group-sparse signal recovery, The Hanson-Wright inequality for random tensors, Learning mean-field equations from particle data using WSINDy, The springback penalty for robust signal recovery, On the robustness of noise-blind low-rank recovery from rank-one measurements, On the value of the fifth maximal projection constant, CLOT norm minimization for continuous hands-off control, Adaptive multi-penalty regularization based on a generalized Lasso path, Generalizing CoSaMP to signals from a union of low dimensional linear subspaces, Robust and stable region-of-interest tomographic reconstruction using a robust width prior, On the volume of unit balls of finite-dimensional Lorentz spaces, Sparse signal reconstruction via the approximations of \(\ell_0\) quasinorm, Integer sampling matrices with small entries ensuring vector recovery, Sparse recovery with integrality constraints, Block-sparse recovery of semidefinite systems and generalized null space conditions, Sparse linear regression from perturbed data, Recovery guarantees for polynomial coefficients from weakly dependent data with outliers, Bayesian approach with extended support estimation for sparse linear regression, A class of null space conditions for sparse recovery via nonconvex, non-separable minimizations, Algebraic cubature on polygonal elements with a circular edge, Hanson-Wright inequality in Hilbert spaces with application to \(K\)-means clustering for non-Euclidean data, Compressive isogeometric analysis, Inverse potential problems for divergence of measures with total variation regularization, Entropy numbers of finite-dimensional embeddings, Adaptive coded aperture design for compressive computed tomography, Signal separation under coherent dictionaries and \(\ell_p\)-bounded noise, Complex phase retrieval from subgaussian measurements, Phaseless compressive sensing using partial support information, Uniform recovery in infinite-dimensional compressed sensing and applications to structured binary sampling, The recovery of ridge functions on the hypercube suffers from the curse of dimensionality, A deterministic 
sparse FFT for functions with structured Fourier sparsity, Sparse approximate reconstruction decomposed by two optimization problems, Measurement matrix optimization via mutual coherence minimization for compressively sensed signals reconstruction, Compressed solving: a numerical approximation technique for elliptic PDEs based on compressed sensing, Single snapshot DOA estimation by minimizing the fraction function in sparse recovery, Learning general sparse additive models from point queries in high dimensions, An algebraic perspective on integer sparse recovery, Computing the spark: mixed-integer programming for the (vector) matroid girth problem, Complexity and applications of the homotopy principle for uniformly constrained sparse minimization, Stochastic greedy algorithms for multiple measurement vectors, Sparse random matrices have simple spectrum, Sparse harmonic transforms: a new class of sublinear-time algorithms for learning functions of many variables, Sparse recovery in bounded Riesz systems with applications to numerical methods for PDEs, Low-rank matrix recovery via regularized nuclear norm minimization, Random sampling and reconstruction of concentrated signals in a reproducing kernel space, Sparse harmonic transforms. II: Best \(s\)-term approximation guarantees for bounded orthonormal product bases in sublinear-time, Data-driven algorithm selection and tuning in optimization and signal processing, A Laplacian approach to \(\ell_1\)-norm minimization, On the optimal constants in the two-sided Stechkin inequalities, Donoho-Logan large sieve principles for modulation and polyanalytic Fock spaces, Tensor theta norms and low rank recovery, Gelfand numbers of embeddings of Schatten classes, Sparse recovery using the discrete cosine transform, Dual-density-based reweighted \(\ell_1\)-algorithms for a class of \(\ell_0\)-minimization problems, Nonuniqueness of solutions of a class of \(\ell_0\)-minimization problems, Optimal fast Johnson-Lindenstrauss embeddings for large data sets, Tight bounds on the mutual coherence of sensing matrices for Wigner d-functions on regular grids, Dimensionality reduction for \(k\)-distance applied to persistent homology, Image multiplicative denoising using adaptive Euler's elastica as the regularization, A deterministic algorithm for constructing multiple rank-1 lattices of near-optimal size, Robust non-parametric regression via incoherent subspace projections, A simple recovery framework for signals with time-varying sparse support, Sparse Fourier transforms on rank-1 lattices for the rapid and low-memory approximation of functions of many variables, Sparsest piecewise-linear regression of one-dimensional data, Recovering sparse networks: basis adaptation and stability under extensions, Robust sparse recovery via a novel convex model, Partial gradient optimal thresholding algorithms for a class of sparse optimization problems, Sufficient conditions for the uniqueness of solution of the weighted norm minimization problem, The uniform sparse FFT with application to PDEs with random coefficients, Sensitivity of low-rank matrix recovery, Generalization bounds for sparse random feature expansions, The sparsity of LASSO-type minimizers, Hierarchical compressed sensing, Proof methods for robust low-rank matrix recovery, New challenges in covariance estimation: multiple structures and coarse quantization, Sparse deterministic and stochastic channels: identification of spreading functions and covariances, Analysis of sparse recovery algorithms via the 
replica method, Recovery under side constraints, Compressive sensing and neural networks from a statistical learning perspective, Compressed sensing in the spherical near-field to far-field transformation, Single-pass randomized QLP decomposition for low-rank approximation, A note on the invertibility of the Gabor frame operator on certain modulation spaces, Weighted \(\ell_p\) (\(0<p\le 1\)) minimization with non-uniform weights for sparse recovery under partial support information, Approximation spaces of deep neural networks, Robust and resource-efficient identification of two hidden layer neural networks, Hierarchical isometry properties of hierarchical measurements, Infinite-dimensional \(\ell ^1\) minimization and function approximation from pointwise data, Refined analysis of sparse MIMO radar, Average-case complexity without the black swans, Global sensitivity analysis based on high-dimensional sparse surrogate construction, Structure dependent sampling in compressed sensing: theoretical guarantees for tight frames, One condition for solution uniqueness and robustness of both \(\ell_1\)-synthesis and \(\ell_1\)-analysis minimizations, Robust Decoding from 1-Bit Compressive Sampling with Ordinary and Regularized Least Squares, Computing a quantity of interest from observational data, Book Review: A mathematical introduction to compressive sensing, Uncertainty Propagation; Intrusive Kinetic Formulations of Scalar Conservation Laws, Accelerating Stochastic Collocation Methods for Partial Differential Equations with Random Input Data, Recovery analysis for weighted \(\ell_{1}\)-minimization using the null space property, Error bounds for compressed sensing algorithms with group sparsity: A unified approach, Explicit universal sampling sets in finite vector spaces, Low rank tensor recovery via iterative hard thresholding, A theoretical result of sparse signal recovery via alternating projection method, A new bound on the block restricted isometry constant in compressed sensing, Compressed sensing for finite-valued signals, Non-uniform recovery guarantees for binary measurements and infinite-dimensional compressed sensing, A provably convergent scheme for compressive sensing under random generative priors, Entropy numbers of embeddings of Schatten classes, Infinite dimensional compressed sensing from anisotropic measurements and applications to inverse problems in PDE, High-dimensional sparse FFT based on sampling along multiple rank-1 lattices, When does OMP achieve exact recovery with continuous dictionaries?, \(\ell^1\)-analysis minimization and generalized (co-)sparsity: when does recovery succeed?, A novel regularization based on the error function for sparse recovery, A multi-parameter regularization model for deblurring images corrupted by impulsive noise, A primal Douglas-Rachford splitting method for the constrained minimization problem in compressive sensing, Entropy numbers of finite dimensional mixed-norm balls and function space embeddings with small mixed smoothness, Compressive statistical learning with random feature moments, Statistical learning guarantees for compressive clustering and compressive mixture modeling, Convex optimization in sums of Banach spaces, A necessary and sufficient condition for sparse vector recovery via \(\ell_1-\ell_2\) minimization, Sparse signal recovery from phaseless measurements via hard thresholding pursuit, Generalized sparse recovery model and its neural dynamical optimization method for compressed sensing, On randomized trace estimates 
for indefinite matrices with an application to determinants, Compressed sensing with local structure: uniform recovery guarantees for the sparsity in levels class, Generic error bounds for the generalized Lasso with sub-exponential data, A Generalized Class of Hard Thresholding Algorithms for Sparse Signal Recovery, Newton-type optimal thresholding algorithms for sparse optimization problems, Iteratively reweighted \(\ell _1\) algorithms with extrapolation, Sparse Hanson-Wright inequalities for subgaussian quadratic forms, A new class of fully discrete sparse Fourier transforms: faster stable implementations with guarantees, Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations, Interior reconstruction in tomography via prior support constrained compressed sensing, Stability of lq-analysis based dual frame with Weibull matrices for 0 < q ≤ 1, Correcting for unknown errors in sparse high-dimensional function approximation, Misspecified nonconvex statistical optimization for sparse phase retrieval, The geometry of off-the-grid compressed sensing, Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry, Compressed sensing of low-rank plus sparse matrices, Covering point-sets with parallel hyperplanes and sparse signal recovery, Erasure Coding for Fault-Oblivious Linear System Solvers, Applied harmonic analysis and data science. Abstracts from the workshop held November 28 -- December 4, 2021 (hybrid meeting), Stochastic Collocation Methods via $\ell_1$ Minimization Using Randomized Quadratures, Do log factors matter? On optimal wavelet approximation and the foundations of compressed sensing, BREAKING THE COHERENCE BARRIER: A NEW THEORY FOR COMPRESSED SENSING, Uniqueness in nuclear norm minimization: flatness of the nuclear norm sphere and simultaneous polarization, Sparsity and level set regularization for diffuse optical tomography using a transport model in 2D, A compressed sensing approach to interpolation of fractional Brownian trajectories for a single particle tracking experiment, Paved with good intentions: analysis of a randomized block Kaczmarz method, A simple proof of the Grünbaum conjecture, On the sparsity of Lasso minimizers in sparse data recovery, A unifying representer theorem for inverse problems and machine learning, Discrete optimization methods for group model selection in compressed sensing, Thin-shell concentration for random vectors in Orlicz balls via moderate deviations and Gibbs measures, Worst-case recovery guarantees for least squares approximation using random samples, Compressed sensing with sparse binary matrices: instance optimal error guarantees in near-optimal time, On the computation of sparse solutions to the controllability problem for discrete-time linear systems, Moment inequalities for matrix-valued U-statistics of order 2, An extremal problem for integer sparse recovery, Relevant sampling of band-limited functions, Convex Cardinal Shape Composition, Estimation in High Dimensions: A Geometric Perspective, Low Complexity Regularization of Linear Inverse Problems, Noise-Shaping Quantization Methods for Frame-Based and Compressive Sampling Systems, Fast and RIP-optimal transforms, Multi-kernel unmixing and super-resolution using the modified matrix pencil method, On sparse beamformer design with reverberation, Erasure recovery matrices for encoder protection, Approximate support recovery of atomic line spectral estimation: a tale of resolution and precision, Sampling 
schemes and recovery algorithms for functions of few coordinate variables, The sparsest solution of the union of finite polytopes via its nonconvex relaxation, Compressive Hermite interpolation: sparse, high-dimensional approximation from gradient-augmented measurements, New conditions on stable recovery of weighted sparse signals via weighted \(l_1\) minimization, Reconstruction of sparse-view tomography via preconditioned Radon sensing matrix, Sparse reconstruction with multiple Walsh matrices, A difference-of-convex functions approach for sparse PDE optimal control problems with nonconvex costs, Truncated sparse approximation property and truncated \(q\)-norm minimization, Lattices from tight frames and vertex transitive graphs, Optimal RIP bounds for sparse signals recovery via \(\ell_p\) minimization, Sharp sufficient conditions for stable recovery of block sparse signals by block orthogonal matching pursuit, Quantized compressed sensing for random circulant matrices, Necessary and Sufficient Conditions for Noiseless Sparse Recovery via Convex Quadratic Splines, Preconditioning for orthogonal matching pursuit with noisy and random measurements: the Gaussian case, A fast algorithm for the semi-definite relaxation of the state estimation problem in power grids, Concave Mirsky Inequality and Low-Rank Recovery, On some aspects of approximation of ridge functions, Finding sparse solutions of systems of polynomial equations via group-sparsity optimization, Combined first and second order variational approaches for image processing, Computing and analyzing recoverable supports for sparse reconstruction, Minimal numerical differentiation formulas, Sketched learning for image denoising, Rapid, large-scale, and effective detection of COVID-19 via non-adaptive testing, On tensor product approximation of analytic functions, Iteratively weighted thresholding homotopy method for the sparse solution of underdetermined linear equations, A geometrical stability condition for compressed sensing, Gaussian approximations in high dimensional estimation, Signal recovery under cumulative coherence, Minimizers of sparsity regularized Huber loss function, Semidefinite programming relaxation methods for global optimization problems with sparse polynomials and unbounded semialgebraic feasible sets, On the post selection inference constant under restricted isometry properties, Sparse recovery from inaccurate saturated measurements, Phase retrieval from Gabor measurements, Improved bounds for sparse recovery from subsampled random convolutions, A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron, Hellmann-Feynman connection for the relative Fisher information, Carl's inequality for quasi-Banach spaces, A unified framework for linear dimensionality reduction in L1, Sparse solutions of linear complementarity problems, A primal-dual homotopy algorithm for \(\ell _{1}\)-minimization with \(\ell _{\infty }\)-constraints, Global optimization for low-dimensional switching linear regression and bounded-error estimation, Conjugate gradient acceleration of iteratively re-weighted least squares methods, A class of deterministic sensing matrices and their application in harmonic detection, Sparse high-dimensional FFT based on rank-1 lattice sampling, Deterministic bounds for restricted isometry in compressed sensing matrices, On maximal relative projection constants, Revisiting compressed sensing: exploiting the efficiency of simplex and sparsification methods, 1-bit compressive sensing: 
reformulation and RRSP-based sign recovery theory, Regularity properties of non-negative sparsity sets, On Dvoretzky's theorem for subspaces of \(L_p\), Infinite-dimensional compressed sensing and function interpolation, Complexity of linear ill-posed problems in Hilbert space, Improved recovery guarantees for phase retrieval from coded diffraction patterns, Low rank matrix recovery from rank one measurements, Robust sparse phase retrieval made easy, Compressed sensing for real measurements of quaternion signals, A novel probabilistic approach for vehicle position prediction in free, partial, and full GPS outages, On the uniqueness of the sparse signals reconstruction based on the missing samples variation analysis, Expander \(\ell_0\)-decoding, PROMP: a sparse recovery approach to lattice-valued signals, Nonlinear approximation with nonstationary Gabor frames, An LP empirical quadrature procedure for parametrized functions, Polynomials with bounds and numerical approximation, Sparse disjointed recovery from noninflating measurements, Estimation of functionals of sparse covariance matrices, Analysis of the equivalence relationship between \(l_{0}\)-minimization and \(l_{p}\)-minimization, Random sampling of bandlimited signals on graphs, Stability of the elastic net estimator, Density theorems for nonuniform sampling of bandlimited functions using derivatives or bunched measurements, A hierarchical framework for recovery in compressive sensing, Sampling in the analysis transform domain, Sparse blind deconvolution and demixing through \(\ell_{1,2}\)-minimization, Analysis \(\ell_1\)-recovery with frames and Gaussian measurements, A Rice method proof of the null-space property over the Grassmannian, Interpolation via weighted \(\ell_{1}\) minimization, Average best \(m\)-term approximation, Compressed sensing with structured sparsity and structured acquisition, An analysis of the SPARSEVA estimate for the finite sample data case, Accelerated first-order methods for hyperbolic programming, Enhancing matrix completion using a modified second-order total variation, Moving horizon estimation for ARMAX processes with additive output noise, Signal recovery under mutual incoherence property and oracle inequalities, Near-optimal encoding for sigma-delta quantization of finite frame expansions, Low-rank matrix recovery via rank one tight frame measurements, Recovery analysis for weighted mixed \(\ell_2 / \ell_p\) minimization with \(0 < p \leq 1\), Greedy subspace pursuit for joint sparse recovery, Discrete uncertainty principles and sparse signal processing, A new generalized shrinkage conjugate gradient method for sparse recovery, On the null space property of \(l_q\)-minimization for \(0 < q \leq 1\) in compressed sensing, Noise folding in completely perturbed compressed sensing, Toward a unified theory of sparse dimensionality reduction in Euclidean space, On uniqueness guarantees of solution in convex regularized linear inverse problems, Tensor networks and hierarchical tensors for the solution of high-dimensional partial differential equations, Sparse recovery in probability via \(l_q\)-minimization with Weibull random matrices for \(0 < q\leq 1\), Stable recovery of low-dimensional cones in Hilbert spaces: one RIP to rule them all, Spark-level sparsity and the \(\ell_1\) tail minimization, Column normalization of a random measurement matrix, Nonlinear frames and sparse reconstructions in Banach spaces, Orthogonal matching pursuit under the restricted isometry property, Generalized sampling and 
infinite-dimensional compressed sensing, Dimensionality reduction with subgaussian matrices: a unified theory, Sparse recovery under weak moment assumptions, Sparse signal recovery using a new class of random matrices, Rapidly computing sparse Legendre expansions via sparse Fourier transforms, Compressed sensing for quaternionic signals, Inpainting via high-dimensional universal shearlet systems, Solution of linear ill-posed problems using random dictionaries, Covering compact metric spaces greedily, Breaking the curse for uniform approximation in Hilbert spaces via Monte Carlo methods, Gelfand numbers related to structured sparsity and Besov space embeddings with small mixed smoothness, Approximately normalized iterative hard thresholding for nonlinear compressive sensing, Compressive sensing in signal processing: algorithms and transform domain formulations, Norm-minimized scattering data from intensity spectra, Efficient dictionary learning with sparseness-enforcing projections, Uniform recovery of fusion frame structured sparse signals, Hard thresholding pursuit algorithms: number of iterations, Preserving injectivity under subgaussian mappings and its application to compressed sensing, Two-sided space-time \(L^1\) polynomial approximation of hypographs within polynomial optimal control, Empirical average-case relation between undersampling and sparsity in X-ray CT, Compressive Learning for Patch-Based Image Denoising, WARPd: A Linearly Convergent First-Order Primal-Dual Algorithm for Inverse Problems with Approximate Sharpness Conditions, On the uniqueness of solutions for the basis pursuit in the continuum, When Randomness Helps in Undersampling, Pooled Testing and Its Applications in the COVID-19 Pandemic, A General Framework of Rotational Sparse Approximation in Uncertainty Quantification, Estimation of block sparsity in compressive sensing, A Novel Compressed Sensing Scheme for Photoacoustic Tomography, Compressive Sensing with Redundant Dictionaries and Structured Measurements, ORKA: Object reconstruction using a K-approximation graph, Testable uniqueness conditions for empirical assessment of undersampling levels in total variation-regularized X-ray CT, Convergence rates for the joint solution of inverse problems with compressed sensing data, Fast Phase Retrieval from Local Correlation Measurements, Self-calibration and biconvex compressive sensing, On block accelerations of quantile randomized Kaczmarz for corrupted systems of linear equations, A Survey of Compressed Sensing, The Quest for Optimal Sampling: Computationally Efficient, Structure-Exploiting Measurements for Compressed Sensing, Cosparsity in Compressed Sensing, Explicit Matrices with the Restricted Isometry Property: Breaking the Square-Root Bottleneck, Tensor Completion in Hierarchical Tensor Representations, Hyperspherical Sparse Approximation Techniques for High-Dimensional Discontinuity Detection, On the geometry of polytopes generated by heavy-tailed random vectors, Model Selection With Lasso-Zero: Adding Straw to the Haystack to Better Find Needles, An Efficient Algorithm for Video Superresolution Based on a Sequential Model, Convergence of Halpern’s Iteration Method with Applications in Optimization, Harmonic Analysis and Random Schrödinger Operators, Optimally Sparse Data Representations, Shearlet-based regularization in statistical inverse learning with an application to x-ray tomography, A Measure Concentration Effect for Matrices of High, Higher, and Even Higher Dimension, Quantile-Based Iterative 
Methods for Corrupted Systems of Linear Equations, Minimization of $L_1$ Over $L_2$ for Sparse Signal Recovery with Convergence Guarantee, Compressive Imaging and Characterization of Sparse Light Deflection Maps, Extracting Sparse High-Dimensional Dynamics from Limited Data, The smoothing objective penalty function method for two-cardinality sparse constrained optimization problems, Unnamed Item, Gaussian random fields: with and without covariances, Sparsest representations and approximations of an underdetermined linear system, A hierarchical Bayesian perspective on majorization-minimization for non-convex sparse regression: application to M/EEG source imaging, Random Sampling and Reconstruction of Sparse Time- and Band-Limited Signals, Overparameterization and Generalization Error: Weighted Trigonometric Interpolation, Stable Image Reconstruction Using Transformed Total Variation Minimization, The Recovery Guarantee for Orthogonal Matching Pursuit Method to Reconstruct Sparse Polynomials, Improved bounds for the RIP of Subsampled Circulant matrices, A theoretical study of COmpRessed SolvING for advection-diffusion-reaction problems, Robust recovery of a kind of weighted l1-minimization without noise level, Long gaps between primes, TWO NEW LOWER BOUNDS FOR THE SPARK OF A MATRIX, Splines Are Universal Solutions of Linear Inverse Problems with Generalized TV Regularization, Robust analysis ℓ1-recovery from Gaussian measurements and total variation minimization, CGIHT: conjugate gradient iterative hard thresholding for compressed sensing and matrix completion, Guarantees of total variation minimization for signal recovery, Sparse recovery from saturated measurements, Dimensionality-reduced subspace clustering, Time for dithering: fast and quantized random embeddings via the restricted isometry property, A null-space-based weightedl1minimization approach to compressed sensing, Theoretical frame properties of wave-packet matrices over prime fields, Polynomial approximation via compressed sensing of high-dimensional functions on lower sets, Stable low-rank matrix recovery via null space properties, Flavors of Compressive Sensing, Bounding the restricted isometry constants for a tight frame, Compressed Sensing with Sparse Corruptions: Fault-Tolerant Sparse Collocation Approximations, Tightness of a New and Enhanced Semidefinite Relaxation for MIMO Detection, An Introduction to Compressed Sensing, Quantized Compressed Sensing: A Survey, On Reconstructing Functions from Binary Measurements, Reconstruction Methods in THz Single-Pixel Imaging, Well-Posed Bayesian Inverse Problems with Infinitely Divisible and Heavy-Tailed Prior Measures, Sparse approximation based on a random overcomplete basis, Compressed Sensing with Nonlinear Fourier Atoms, The homotopy method revisited: Computing solution paths of $\ell _1$-regularized problems, Unnamed Item, Joint Sparse Recovery Based on Variances, A Tight Bound of Hard Thresholding, Unnamed Item, Unnamed Item, Implementing the Alternating Direction Method of Multipliers for Big Datasets: A Case Study of Least Absolute Shrinkage and Selection Operator, Fusion frames and distributed sparsity, Self-Calibration and Bilinear Inverse Problems via Linear Least Squares, Least Sparsity of $p$-Norm Based Optimization Problems with $p>1$, Unnamed Item, Spherical Designs and Nonconvex Minimization for Recovery of Sparse Signals on the Sphere, Inverse potential problems in divergence form for measures in the plane, Structure and Optimisation in Computational Harmonic 
Analysis: On Key Aspects in Sparse Regularisation, Model selection for hybrid dynamical systems via sparse regression, Computational approaches to non-convex, sparsity-inducing multi-penalty regularization, Binary sparse signal recovery with binary matching pursuit *, One-bit compressed sensing via p (0<p <1)-minimization method, Controllability of Network Opinion in Erdös--Rényi Graphs Using Sparse Control Inputs, Unnamed Item, Unnamed Item, Unnamed Item, Perturbation analysis of low-rank matrix stable recovery, Unnamed Item, The Dantzig selector: recovery of signal via ℓ 1 − αℓ 2 minimization, A new sufficient condition for sparse vector recovery via ℓ1 − ℓ2 local minimization, Robust Width: A Characterization of Uniformly Stable and Robust Compressed Sensing, Extended Dai-Yuan conjugate gradient strategy for large-scale unconstrained optimization with applications to compressive sensing, Compressive Sensing for Cut Improvement and Local Clustering, Proximal Gradient Methods for Machine Learning and Imaging, Regularization: From Inverse Problems to Large-Scale Machine Learning, A unified approach to uniform signal recovery from nonlinear observations, Robust one-bit compressed sensing with partial circulant matrices, Optimal Reaction Coordinates: Variational Characterization and Sparse Computation, Hay from the haystack: explicit examples of exponential quantum circuit complexity, Random sections of \(\ell_p\)-ellipsoids, optimal recovery and Gelfand numbers of diagonal operators, Optimal Parseval frames: total coherence and total volume, A wonderful triangle in compressed sensing, Norms of structured random matrices, A Variable Density Sampling Scheme for Compressive Fourier Transform Interferometry, Heavy-ball-based optimal thresholding algorithms for sparse linear inverse problems, Heavy-ball-based hard thresholding algorithms for sparse signal recovery, Time-frequency analysis on flat tori and Gabor frames in finite dimensions, Structured model selection via ℓ1−ℓ2 optimization, Sampling rates for \(\ell^1\)-synthesis, \(\boldsymbol{L_1-\beta L_q}\) Minimization for Signal and Image Recovery, LASSO Reloaded: A Variational Analysis Perspective with Applications to Compressed Sensing, On fast Johnson-Lindenstrauss embeddings of compact submanifolds of \(\mathbb{R}^N\) with boundary, Optimal approximation of infinite-dimensional holomorphic functions, Optimal learning, Nonconvex \(\ell_p-\alpha\ell_q\) minimization method and \(p\)-RIP condition for stable recovery of approximately \(k\)-sparse signals, Riemannian thresholding methods for row-sparse and low-rank matrix recovery, DCACO: an algorithm for designing incoherent redundant matrices, Just least squares: binary compressive sampling with low generative intrinsic dimension, Modewise operators, the tensor restricted isometry property, and low-rank tensor recovery, On sparsity‐inducing methods in system identification and state estimation, A tight bound of modified iterative hard thresholding algorithm for compressed sensing., \(k\)-sparse vector recovery via truncated \(\ell_1 -\ell_2\) local minimization, Scaled proximal gradient methods for sparse optimization problems, On the determination of Lagrange multipliers for a weighted Lasso problem using geometric and convex analysis techniques, Stable Recovery of Sparsely Corrupted Signals Through Justice Pursuit De-Noising, Sample Size Estimates for Risk-Neutral Semilinear PDE-Constrained Optimization, Sparse and simple structure estimation via prenet penalization, Bayes in action in 
deep learning and dictionary learning

Uses Software