Optimal injectivity conditions for bilinear inverse problems with applications to identifiability of deconvolution problems
From MaRDI portal
Publication:5347292
Abstract: We study identifiability for bilinear inverse problems under sparsity and subspace constraints. We show that, up to a global scaling ambiguity, almost all such maps are injective on the set of pairs of sparse vectors if the number of measurements exceeds \(2(s_1+s_2)-2\), where \(s_1\) and \(s_2\) denote the sparsity levels of the two input vectors, and injective on the set of pairs of vectors lying in known subspaces of dimensions \(m_1\) and \(m_2\) if the number of measurements exceeds \(2(m_1+m_2)-4\). We also prove that both bounds are tight, in the sense that injectivity fails for any smaller number of measurements. Our proof technique draws on algebraic geometry. As an application, we derive optimal identifiability conditions for the deconvolution problem, improving on recent work of Li et al. [1].
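The global scaling ambiguity mentioned in the abstract can be made concrete for circular deconvolution, the application treated in the paper. The sketch below is a minimal numerical illustration (the array length, random vectors, and scaling factor are arbitrary choices, not taken from the paper): the bilinear measurement map \((u, v) \mapsto u \ast v\) cannot distinguish the pair \((u, v)\) from \((\alpha u, v/\alpha)\), so injectivity can only ever hold up to this rescaling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Circular deconvolution as a bilinear inverse problem: the measurement
# y = u * v (circular convolution) is linear in each argument separately.
def measure(u, v):
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))

n = 16
u = rng.standard_normal(n)
v = rng.standard_normal(n)

# The scaling ambiguity: (alpha*u, v/alpha) yields exactly the same
# measurements as (u, v), for any nonzero alpha.
alpha = 3.7
y1 = measure(u, v)
y2 = measure(alpha * u, v / alpha)
assert np.allclose(y1, y2)
```

The identifiability results of the paper say that, despite this unavoidable ambiguity, almost all bilinear maps with enough measurements distinguish all remaining pairs.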
Recommendations
- Identifiability of the deconvolution problem
- Sparse blind deconvolution and demixing through \(\ell_{1,2}\)-minimization
- The convex geometry of linear inverse problems
- Blind inverse problems with isolated spikes
- BranchHull: convex bilinear inversion from the entrywise product of signals with known signs
Cites work
- scientific article; zbMATH DE number 3572315
- A mathematical introduction to compressive sensing
- An algebraic characterization of injectivity in phase retrieval
- Asymptotic theory of finite dimensional normed spaces. With an appendix by M. Gromov: Isoperimetric inequalities in Riemannian manifolds
- Blind Deconvolution Meets Blind Demixing: Algorithms and Performance Bounds
- Blind Deconvolution Using Convex Programming
- Compressed sensing
- Constrained quantum tomography of semi-algebraic sets with applications to low-rank matrix recovery
- Exact matrix completion via convex optimization
- Identifiability and Stability in Blind Deconvolution Under Minimal Assumptions
- Identifiability in Blind Deconvolution With Subspace or Sparsity Constraints
- Improved recovery guarantees for phase retrieval from coded diffraction patterns
- Near-Optimal Compressed Sensing of a Class of Sparse Low-Rank Matrices Via Sparse Power Factorization
- On signal reconstruction without phase
- On sparse reconstruction from Fourier and Gaussian measurements
- Optimally sparse representation in general (nonorthogonal) dictionaries via \(\ell_1\) minimization
- Phase retrieval from coded diffraction patterns
- Phase retrieval from power spectra of masked signals
- Phase retrieval: stability and recovery guarantees
- Phaselift: exact and stable signal recovery from magnitude measurements via convex programming
- Recent developments in blind channel equalization: From cyclostationarity to subspaces
- Self-calibration and biconvex compressive sensing
- Stable signal recovery from incomplete and inaccurate measurements
- Structured random measurements in signal processing
- Suprema of chaos processes and the restricted isometry property
- The convex geometry of linear inverse problems
- The red book of varieties and schemes. Includes the Michigan lectures (1974) on ``Curves and their Jacobians''.
- Uniqueness conditions for low-rank matrix recovery
Cited in (14)
- Almost everywhere generalized phase retrieval
- Sparse power factorization: balancing peakiness and sample complexity
- Exact Recovery of Multichannel Sparse Blind Deconvolution via Gradient Descent
- Efficient Identification of Butterfly Sparse Matrix Factorizations
- Almost everywhere injectivity conditions for the matrix recovery problem
- Information theory and recovery algorithms for data fusion in Earth observation
- Riemannian thresholding methods for row-sparse and low-rank matrix recovery
- Geometry and symmetry in short-and-sparse deconvolution
- Maximum likelihood estimation of regularization parameters in high-dimensional inverse problems: an empirical Bayesian approach. II: Theoretical analysis
- Spectral Methods for Passive Imaging: Nonasymptotic Performance and Robustness
- Sparse blind deconvolution and demixing through \(\ell_{1,2}\)-minimization
- Convex and Nonconvex Optimization Are Both Minimax-Optimal for Noisy Blind Deconvolution Under Random Designs
- Proof methods for robust low-rank matrix recovery
- Self-calibration and bilinear inverse problems via linear least squares