Robust and resource-efficient identification of two hidden layer neural networks
From MaRDI portal
Publication:2117339
DOI: 10.1007/s00365-021-09550-5
zbMath: 1504.65042
arXiv: 1907.00485
OpenAlex: W3176108623
MaRDI QID: Q2117339
Michael Rauchensteiner, Massimo Fornasier, Timo Klock
Publication date: 21 March 2022
Published in: Constructive Approximation
Full work available at URL: https://arxiv.org/abs/1907.00485
Keywords: frames; deparametrization; deep neural networks; active sampling; nonconvex optimization on matrix spaces; exact identifiability
MSC classifications: Artificial neural networks and deep learning (68T07); Nonconvex programming, global optimization (90C26); Algorithms for approximation of functions (65D15)
Related Items
- Efficient Identification of Butterfly Sparse Matrix Factorizations
- Approximate real symmetric tensor rank
- Stable recovery of entangled weights: towards robust identification of deep neural networks from minimal samples
- Information theory and recovery algorithms for data fusion in Earth observation
Cites Work
- A mathematical introduction to compressive sensing
- Learning functions of few arbitrary linear parameters in high dimensions
- Finding a low-rank basis in a matrix subspace
- Entropy and sampling numbers of classes of ridge functions
- Estimation of the mean of a multivariate normal distribution
- Semiparametric least squares (SLS) and weighted SLS estimation of single-index models
- Reconstructing a neural net from its output
- Interpolation by ridge polynomials and its application in neural networks
- Provable approximation properties for deep neural networks
- Direct estimation of the index coefficient in a single-index model
- Finite normalized tight frames
- Weak convergence and empirical processes. With applications to statistics
- Capturing ridge functions in high dimensions from point queries
- Active Subspace Methods in Theory and Practice: Applications to Kriging Surfaces
- High-Dimensional Covariance Decomposition into Sparse Markov and Independence Models
- Active Subspaces
- Tensor rank is NP-complete
- Greed is Good: Algorithmic Results for Sparse Approximation
- On Principal Hessian Directions for Data Visualization and Dimension Reduction: Another Application of Stein's Lemma
- Approximation by Ridge Functions and Neural Networks
- High-Dimensional Probability
- DeepStack: Expert-level artificial intelligence in heads-up no-limit poker
- Energy Propagation in Deep Convolutional Neural Networks
- Neural Network Learning
- Deep Neural Network Approximation Theory
- Size-independent sample complexity of neural networks
- Robust and resource efficient identification of shallow neural networks by fewest samples
- Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Most Tensor Problems Are NP-Hard
- Understanding Machine Learning
- Perturbation bounds in connection with singular value decomposition