Compressive sensing and neural networks from a statistical learning perspective
Publication: 2106482
DOI: 10.1007/978-3-031-09745-4_8
zbMATH Open: 1504.94027
arXiv: 2010.15658
OpenAlex: W3153348027
MaRDI QID: Q2106482
FDO: Q2106482
Authors: Arash Behboodi, Holger Rauhut, Ekkehard Schnoor
Publication date: 14 December 2022
Abstract: Various iterative reconstruction algorithms for inverse problems can be unfolded as neural networks. Empirically, this approach has often led to improved results, but theoretical guarantees are still scarce. While some progress on generalization properties of neural networks has been made, great challenges remain. In this chapter, we discuss and combine these topics to present a generalization error analysis for a class of neural networks suitable for sparse reconstruction from few linear measurements. The hypothesis class considered is inspired by the classical iterative soft-thresholding algorithm (ISTA). The neural networks in this class are obtained by unfolding iterations of ISTA and learning some of the weights. Based on training samples, we aim at learning the optimal network parameters via empirical risk minimization and thereby the optimal network that reconstructs signals from their compressive linear measurements. In particular, we may learn a sparsity basis that is shared by all of the iterations/layers and thereby obtain a new approach for dictionary learning. For this class of networks, we present a generalization bound, which is based on bounding the Rademacher complexity of hypothesis classes consisting of such deep networks via Dudley's integral. Remarkably, under realistic conditions, the generalization error scales only logarithmically in the number of layers, and at most linearly in the number of measurements.
Full work available at URL: https://arxiv.org/abs/2010.15658
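The unrolling described in the abstract can be sketched concretely: each network layer applies one ISTA update, i.e. a gradient step on the data-fidelity term followed by elementwise soft-thresholding, and the dictionary shared across all layers is the quantity that would be learned by empirical risk minimization. The sketch below is a minimal NumPy illustration under our own assumptions; the names (`unfolded_ista`, `Phi`, `lam`) and the exact parameterization of the learnable weights are ours and do not reproduce the paper's precise hypothesis class.

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft-thresholding operator, the nonlinearity of each layer."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unfolded_ista(y, A, Phi, n_layers=20, lam=0.1):
    """Reconstruct a signal from measurements y = A x by unfolding n_layers
    ISTA iterations. Phi is a dictionary shared by all layers; in the paper's
    setting it would be a learned parameter, here it is simply given."""
    B = A @ Phi                                # effective matrix in the coefficient domain
    step = 1.0 / np.linalg.norm(B, 2) ** 2     # standard ISTA step size 1 / ||B||_2^2
    z = np.zeros(Phi.shape[1])                 # sparse coefficient estimate
    for _ in range(n_layers):                  # one loop pass = one network layer
        z = soft_threshold(z + step * B.T @ (y - B @ z), step * lam)
    return Phi @ z                             # synthesize the signal estimate

# Toy usage with a random orthonormal placeholder dictionary.
rng = np.random.default_rng(0)
n, m, s = 128, 48, 5
Phi = np.linalg.qr(rng.standard_normal((n, n)))[0]
z_true = np.zeros(n)
z_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
x_true = Phi @ z_true
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = unfolded_ista(A @ x_true, A, Phi, n_layers=100, lam=0.02)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In the learned variant, Phi (and possibly the thresholds) would be treated as trainable parameters and fitted by minimizing the empirical reconstruction risk over training pairs of signals and measurements; that is the setting to which the generalization bound described in the abstract applies.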
Recommendations
- NETT: solving inverse problems with deep neural networks
- A provably convergent scheme for compressive sensing under random generative priors
- Tighter guarantees for the compressive multi-layer perceptron
- Big in Japan: regularizing networks for solving inverse problems
- Solving ill-posed inverse problems using iterative deep neural networks
Mathematics Subject Classification:
- Artificial neural networks and deep learning (68T07)
- Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
Cites Work
- Title not available
- Probability in Banach spaces. Isoperimetry and processes
- Understanding Machine Learning
- A mathematical introduction to compressive sensing
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Rademacher and Gaussian complexities: risk bounds and structural results (10.1162/153244303321897690)
- Upper and Lower Bounds for Stochastic Processes
- Compressed Sensing and Redundant Dictionaries
- Rademacher penalties and structural risk minimization
- Title not available
- Dictionary Identification—Sparse Matrix-Factorization via $\ell_1$-Minimization
- Learnability, stability and uniform convergence
- Robustness and generalization
- Stability results in learning theory
- On the identifiability of overcomplete dictionaries via the minimisation principle underlying K-SVD
- Parseval proximal neural networks
- Sample Complexity of Dictionary Learning and Other Matrix Factorizations
- Solving inverse problems using data-driven models
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Vapnik-Chervonenkis dimension of recurrent neural networks
- Sample complexity for learning recurrent perceptron mappings
- Size-independent sample complexity of neural networks
- A Vector-Contraction Inequality for Rademacher Complexities
- On the Minimax Risk of Dictionary Learning
Cited In (4)
Uses Software