A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks
DOI: 10.1007/S11222-022-10169-0 · zbMATH Open: 1499.62026 · arXiv: 2201.08652 · OpenAlex: W4307156392 · MaRDI QID: Q2103975 · FDO: Q2103975
Authors: Xiaoyu Ma, S. Sardy, Nick Hengartner, Nikolai Bobenko, Yen Ting Lin
Publication date: 9 December 2022
Published in: Statistics and Computing
Full work available at URL: https://arxiv.org/abs/2201.08652
Recommendations
- False discoveries occur early on the Lasso path
- Taming Neural Networks with TUSLA: Nonconvex Learning via Adaptive Stochastic Gradient Langevin Algorithms
- Asymptotic properties of one-layer artificial neural networks with sparse connectivity
- Consistent Sparse Deep Learning: Theory and Computation
- Testing for neglected nonlinearity using artificial neural networks with many randomized hidden unit activations
Mathematics Subject Classification
- Computational methods for problems pertaining to statistics (62-08)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Artificial neural networks and deep learning (68T07)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Quantile universal threshold
- Title not available
- Ideal spatial adaptation by wavelet shrinkage
- Title not available
- Statistics for high-dimensional data. Methods, theory and applications.
- Random forests
- A survey of cross-validation procedures for model selection
- Title not available
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Atomic Decomposition by Basis Pursuit
- Universal approximation bounds for superpositions of a sigmoidal function
- Title not available
- Model Selection and Estimation in Regression with Grouped Variables
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Needles and straw in haystacks: Empirical Bayes estimates of possibly sparse sequences
- Learning representations by back-propagating errors
- Decoding by Linear Programming
- Compressed sensing
- Optimization with sparsity-inducing penalties
- Approximation by superpositions of a sigmoidal function
- The Noise-Sensitivity Phase Transition in Compressed Sensing
- Transformed \(\ell_1\) regularization for learning sparse deep neural networks
- Optimal approximation with sparsely connected deep neural networks
- Make \(\ell_1\) regularization effective in training sparse CNN
- High-dimensional dynamics of generalization error in neural networks
- Covariance stabilizing transformations
- Deep Neural Network Approximation Theory
- The gap between theory and practice in function approximation with deep neural networks
- Scaling description of generalization with number of parameters in deep learning
- Surprises in high-dimensional ridgeless least squares interpolation
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
- Model Selection With Lasso-Zero: Adding Straw to the Haystack to Better Find Needles
- Consistent Sparse Deep Learning: Theory and Computation
Uses Software