A provably convergent scheme for compressive sensing under random generative priors
Publication: 2658735
Abstract: Deep generative modeling has led to new, state-of-the-art approaches for enforcing structural priors in a variety of inverse problems. In contrast to priors given by sparsity, deep models can provide direct low-dimensional parameterizations of the manifold of images or signals belonging to a particular natural class, allowing recovery algorithms to be posed in a low-dimensional space. This dimensionality may even be lower than the sparsity level of the same signals when viewed in a fixed basis. What has not been known about these methods is whether there are computationally efficient algorithms whose sample complexity is optimal in the dimensionality of the representation given by the generative model. In this paper, we present such an algorithm and analysis. Under the assumption that the generative model is a neural network that is sufficiently expansive at each layer and has Gaussian weights, we provide a gradient descent scheme and prove that, for noisy compressive measurements of a signal in the range of the model, the algorithm converges to that signal, up to the noise level. The sample complexity scales linearly in the input dimensionality of the generative prior, and thus cannot be improved except in constants and in factors depending on other variables. To the best of the authors' knowledge, this is the first recovery guarantee for compressive sensing under generative priors achieved by a computationally efficient algorithm.
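The scheme the abstract describes minimizes the empirical risk f(z) = (1/2)·‖A G(z) − y‖² over the latent code z by (sub)gradient descent. The sketch below is a minimal illustration under stated assumptions, not the paper's reference implementation: it assumes a two-layer expansive ReLU generator with i.i.d. Gaussian weights and a Gaussian measurement matrix, and all names (`G`, `W1`, `W2`, `step`), dimensions, and the step size are illustrative choices. The negation check is the device used in this line of work to escape the spurious stationary point near a negative multiple of the true latent code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions chosen for illustration: latent dim k << hidden dim n1 << signal
# dim n (the "expansive" regime), with m measurements scaling linearly in k
# up to constants and log factors.
k, n1, n, m = 10, 250, 1000, 120

# i.i.d. Gaussian weights and a Gaussian measurement matrix (illustrative scaling).
W1 = rng.normal(size=(n1, k)) / np.sqrt(n1)
W2 = rng.normal(size=(n, n1)) / np.sqrt(n)
A = rng.normal(size=(m, n)) / np.sqrt(m)

def relu(t):
    return np.maximum(t, 0.0)

def G(z):
    """Two-layer expansive ReLU generator."""
    return relu(W2 @ relu(W1 @ z))

def loss_and_grad(z, y):
    """Empirical risk f(z) = 0.5 * ||A G(z) - y||^2 and a subgradient in z."""
    h1 = W1 @ z
    h2 = W2 @ relu(h1)
    r = A @ relu(h2) - y
    f = 0.5 * float(r @ r)
    # Backpropagate through both ReLU layers (subgradient at the kinks).
    g = W1.T @ ((h1 > 0) * (W2.T @ ((h2 > 0) * (A.T @ r))))
    return f, g

# Ground truth in the range of G, observed through noisy compressive measurements.
z_star = rng.normal(size=k)
y = A @ G(z_star) + 0.01 * rng.normal(size=m)

z = rng.normal(size=k)   # random initialization
step = 0.25              # hand-picked step size, illustrative only
for _ in range(3000):
    f, g = loss_and_grad(z, y)
    f_neg, _ = loss_and_grad(-z, y)
    if f_neg < f:
        z = -z           # negation step: jump out of the spurious negative basin
        continue
    z = z - step * g

err = np.linalg.norm(G(z) - G(z_star)) / np.linalg.norm(G(z_star))
print(f"relative recovery error: {err:.3e}")
```

With these toy dimensions the loop typically drives the relative error down to roughly the noise level, consistent with the convergence guarantee stated in the abstract; the step size and iteration count here are not the theoretically prescribed ones.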
Recommendations
- Deep Generative Models and Inverse Problems
- Compressive sensing and neural networks from a statistical learning perspective
- An efficient algorithm for compression-based compressed sensing
- Just least squares: binary compressive sampling with low generative intrinsic dimension
- Rate-optimal denoising with deep neural networks
Cites work
- scientific article; zbMATH DE number 5060482
- A mathematical introduction to compressive sensing
- Compressed sensing: how sharp is the restricted isometry property?
- Fixed-Point Continuation for $\ell_1$-Minimization: Methodology and Convergence
- Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk
- Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
- Primal-dual extragradient methods for nonlinear nonsmooth PDE-constrained optimization
Cited in (10)
- Compressive phase retrieval: Optimal sample complexity with deep generative priors
- Deep Generative Models and Inverse Problems
- Just least squares: binary compressive sampling with low generative intrinsic dimension
- Multi-layer state evolution under random convolutional design
- scientific article; zbMATH DE number 6999917
- Learning finite-dimensional coding schemes with nonlinear reconstruction maps
- Compressive sensing and neural networks from a statistical learning perspective
- Consistent approximations in composite optimization
- Solving Inverse Problems by Joint Posterior Maximization with Autoencoding Prior
- Rate-optimal denoising with deep neural networks