Compressive sensing and neural networks from a statistical learning perspective
From MaRDI portal
Publication:2106482
Abstract: Various iterative reconstruction algorithms for inverse problems can be unfolded as neural networks. Empirically, this approach has often led to improved results, but theoretical guarantees are still scarce. While some progress on the generalization properties of neural networks has been made, great challenges remain. In this chapter, we discuss and combine these topics to present a generalization error analysis for a class of neural networks suitable for sparse reconstruction from few linear measurements. The hypothesis class considered is inspired by the classical iterative soft-thresholding algorithm (ISTA). The neural networks in this class are obtained by unfolding iterations of ISTA and learning some of the weights. Based on training samples, we aim to learn the optimal network parameters via empirical risk minimization, and thereby the optimal network that reconstructs signals from their compressive linear measurements. In particular, we may learn a sparsity basis that is shared by all iterations/layers and thereby obtain a new approach to dictionary learning. For this class of networks, we present a generalization bound based on bounding the Rademacher complexity of hypothesis classes consisting of such deep networks via Dudley's integral. Remarkably, under realistic conditions, the generalization error scales only logarithmically in the number of layers and at most linearly in the number of measurements.
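The unfolding construction in the abstract can be illustrated with a minimal sketch: each layer of the network performs one ISTA iteration, i.e. a gradient step on the data-fidelity term followed by soft thresholding. The NumPy code below is a plain (unlearned) ISTA sketch; the variable names, the threshold `tau`, and the step-size choice are illustrative assumptions. In the learned variant the chapter considers, some of these weights (e.g. a sparsity basis shared across layers) would instead be trained via empirical risk minimization.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1: shrink toward zero by tau.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unfolded_ista(y, A, n_layers=20, tau=0.1, step=None):
    """Reconstruct a sparse vector x from linear measurements y = A @ x
    by running n_layers unrolled ISTA iterations.

    Each "layer" computes
        x <- S_{step*tau}( x + step * A.T @ (y - A @ x) ),
    targeting the LASSO objective 0.5*||A x - y||^2 + tau*||x||_1.
    Here A and tau are fixed; in a learned network they would be
    trainable parameters shared across layers.
    """
    m, n = A.shape
    if step is None:
        # Standard ISTA step size 1/L with L = ||A||_2^2.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(n_layers):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * tau)
    return x
```

With more layers and a small threshold, this recovers sparse signals from far fewer measurements than unknowns, which is the setting the generalization analysis addresses.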
Recommendations
- NETT: solving inverse problems with deep neural networks
- A provably convergent scheme for compressive sensing under random generative priors
- Tighter guarantees for the compressive multi-layer perceptron
- Big in Japan: regularizing networks for solving inverse problems
- Solving ill-posed inverse problems using iterative deep neural networks
Cites work
- Scientific article without available title (zbMATH DE number 1391397)
- DOI: 10.1162/153244303321897690
- A Vector-Contraction Inequality for Rademacher Complexities
- A mathematical introduction to compressive sensing
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Compressed Sensing and Redundant Dictionaries
- Dictionary Identification—Sparse Matrix-Factorization via $\ell_1$-Minimization
- Learnability, stability and uniform convergence
- On the Minimax Risk of Dictionary Learning
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- On the identifiability of overcomplete dictionaries via the minimisation principle underlying K-SVD
- Parseval proximal neural networks
- Probability in Banach spaces. Isoperimetry and processes
- Rademacher penalties and structural risk minimization
- Robustness and generalization
- Stability results in learning theory
- Sample Complexity of Dictionary Learning and Other Matrix Factorizations
- Sample complexity for learning recurrent perceptron mappings
- Size-independent sample complexity of neural networks
- Solving inverse problems using data-driven models
- The sample complexity of dictionary learning
- Understanding machine learning. From theory to algorithms
- Upper and lower bounds for stochastic processes. Modern methods and classical problems
- Vapnik-Chervonenkis dimension of recurrent neural networks
Cited in (5)
- Tighter guarantees for the compressive multi-layer perceptron
- Estimates on compressed neural networks regression
- A provably convergent scheme for compressive sensing under random generative priors
- Angular scattering function estimation using deep neural networks
- Learning to optimize: a tutorial for continuous and mixed-integer optimization