Mehler’s Formula, Branching Process, and Compositional Kernels of Deep Neural Networks
Publication: 5881138
DOI: 10.1080/01621459.2020.1853547
zbMath: 1506.68108
arXiv: 2004.04767
OpenAlex: W3107882811
MaRDI QID: Q5881138
Tengyuan Liang, Unnamed Author
Publication date: 9 March 2023
Published in: Journal of the American Statistical Association
Full work available at URL: https://arxiv.org/abs/2004.04767
Keywords: Galton-Watson process; deep neural networks; compositional kernels; random features regression; random weights initialization
MSC classification: Artificial neural networks and deep learning (68T07); Applications of branching processes (60J85); Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
Cites Work
- Spherical harmonics and approximations on the unit sphere. An introduction
- Bayesian learning for neural networks
- Surprises in high-dimensional ridgeless least squares interpolation
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
- Just interpolate: kernel "ridgeless" regression can generalize
- Optimal rates for the regularized least-squares algorithm
- Positive definite functions on spheres
- Probability on Trees and Networks
- A mean field view of the landscape of two-layer neural networks
- Two Models of Double Descent for Weak Features
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
- Benign overfitting in linear regression
- Does learning require memorization? a short tale about a long tail
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- Mean Field Analysis of Neural Networks: A Law of Large Numbers
- Breaking the Curse of Dimensionality with Convex Neural Networks
- A Limit Theorem for Multidimensional Galton-Watson Processes
- Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits