Generalizing CoSaMP to signals from a union of low dimensional linear subspaces
From MaRDI portal
Publication: 2175016
DOI: 10.1016/J.ACHA.2018.11.005
zbMATH Open: 1436.94019
arXiv: 1703.01920
OpenAlex: W3106136109
Wikidata: Q128709178
Scholia: Q128709178
MaRDI QID: Q2175016
FDO: Q2175016
Publication date: 27 April 2020
Published in: Applied and Computational Harmonic Analysis
Abstract: The idea that signals reside in a union of low dimensional subspaces subsumes many low dimensional models that have been used extensively over the past decade in many fields and applications. Until recently, the vast majority of works studied each of these models on its own. A recent approach, however, suggests providing a general theory for low dimensional models using their Gaussian mean width, which serves as a measure of the intrinsic low dimensionality of the data. In this work we use this approach to study a generalized version of the popular compressive sampling matching pursuit (CoSaMP) algorithm and to provide general recovery guarantees for signals from a union of low dimensional linear subspaces, under the assumption that the measurement matrix is Gaussian. We discuss the implications of our results for specific models, and use the generalized algorithm as inspiration for a new greedy method for signal reconstruction in a combined sparse-synthesis and cosparse-analysis model. We perform experiments that demonstrate the usefulness of the proposed strategy.
Full work available at URL: https://arxiv.org/abs/1703.01920
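For orientation, the classical CoSaMP iteration (Needell & Tropp, cited below) that the paper generalizes can be sketched as follows. This is a minimal illustrative implementation of the standard sparse-synthesis case only, not the paper's generalized algorithm: the generalization replaces the hard-thresholding/pruning steps below with projections onto a union of low dimensional subspaces. All function and variable names are our own.

```python
import numpy as np

def cosamp(A, y, s, n_iter=20, tol=1e-6):
    """Classical CoSaMP: recover an s-sparse x from measurements y = A @ x.

    Illustrative sketch only; the paper's generalized version swaps the
    s-sparse projections for projections onto a union of linear subspaces.
    """
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_iter):
        # Signal proxy: correlate the residual with the columns of A,
        # then identify the 2s largest components.
        proxy = A.T @ r
        omega = np.argsort(np.abs(proxy))[-2 * s:]
        # Merge the identified components with the current support.
        T = np.union1d(omega, np.flatnonzero(x))
        # Least-squares fit restricted to the merged support.
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
        # Prune: keep only the s largest entries (projection onto
        # the set of s-sparse signals).
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-s:]
        x[keep] = b[keep]
        # Update the residual and stop if it is small enough.
        r = y - A @ x
        if np.linalg.norm(r) < tol:
            break
    return x
```

With a Gaussian measurement matrix of m = 32 rows, this typically recovers a 3-sparse signal in dimension 64 exactly, matching the setting (Gaussian measurements) under which the paper proves its recovery guarantees.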
Cites Work
- ADMiRA: Atomic Decomposition for Minimum Rank Approximation
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussion and rejoinder)
- Matching pursuits with time-frequency dictionaries
- Exact matrix completion via convex optimization
- Decoding by Linear Programming
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Stable signal recovery from incomplete and inaccurate measurements
- Sparse and Redundant Representations
- From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
- Solving Inverse Problems With Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity
- Iterative hard thresholding for compressed sensing
- CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- A mathematical introduction to compressive sensing
- Adaptive greedy approximations
- Random projections of smooth manifolds
- The cosparse analysis model and algorithms
- The convex geometry of linear inverse problems
- Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA)
- Atomic decomposition by basis pursuit
- Living on the edge: phase transitions in convex programs with random data
- Signal Space CoSaMP for Sparse Recovery With Redundant Dictionaries
- Subspace Pursuit for Compressive Sensing Signal Reconstruction
- Model-Based Compressive Sensing
- Hard Thresholding Pursuit: An Algorithm for Compressive Sensing
- One-bit compressed sensing with non-Gaussian measurements
- Robust Recovery of Signals From a Structured Union of Subspaces
- Sparse Recovery Algorithms: Sufficient Conditions in Terms of Restricted Isometry Constants
- Orthogonal least squares methods and their application to non-linear system identification
- A Theory for Sampling Signals From a Union of Subspaces
- Sampling Theorems for Signals From the Union of Finite-Dimensional Linear Subspaces
- Sampling and Reconstructing Signals From a Union of Linear Subspaces
- Greedy-like algorithms for the cosparse analysis model
- Robust 1-bit Compressed Sensing and Sparse Logistic Regression: A Convex Programming Approach
- Estimation in High Dimensions: A Geometric Perspective
- RIP-Based Near-Oracle Performance Guarantees for SP, CoSaMP, and IHT
- Greedy signal space methods for incoherence and beyond
- Performance comparisons of greedy algorithms in compressed sensing
- Sharp Time–Data Tradeoffs for Linear Inverse Problems
- Recipes for Stable Linear Embeddings From Hilbert Spaces to $\mathbb{R}^m$
Cited In (4)
- CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- Convergence analysis on the alternating direction method of multipliers for the cosparse optimization problem
- Low-rank signal subspace: parameterization, projection and signal estimation
- Convergence on thresholding-based algorithms for dictionary-sparse recovery