Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks
From MaRDI portal
Publication:6590448
Cites work
- scientific article; zbMATH DE number 4082316 (no title available)
- scientific article; zbMATH DE number 46753 (no title available)
- A Bound on Tail Probabilities for Quadratic Forms in Independent Random Variables
- A note on the Hanson-Wright inequality for random vectors with dependencies
- A note on the Pennington-Worah distribution
- A random matrix approach to neural networks
- Alice and Bob Meet Banach
- An Inverse Matrix Adjustment Arising in Discriminant Analysis
- Asymptotic freeness of layerwise Jacobians caused by invariance of multilayer perceptron: the Haar orthogonal case
- Asymptotic normality for eigenvalue statistics of a general sample covariance matrix when \(p/n \to \infty\) and applications
- CLT for linear spectral statistics of normalized sample covariance matrices with the dimension much larger than the sample size
- Concentration inequalities. A nonasymptotic theory of independence
- Convergence of the largest eigenvalue of normalized sample covariance matrices when \(p\) and \(n\) both tend to infinity with their ratio converging to zero
- Convergence to the semicircle law
- Deep learning: a statistical viewpoint
- Eigenvalue distribution of some nonlinear models of random matrices
- Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration
- Hanson-Wright inequality and sub-Gaussian concentration
- High-dimensional probability. An introduction with applications in data science
- Just interpolate: kernel "ridgeless" regression can generalize
- Large-dimensional random matrix theory and its applications in deep learning and wireless communications
- Learning curves of generic features maps for realistic datasets with a teacher-student model*
- Lectures on the Combinatorics of Free Probability
- Limiting spectral distribution of normalized sample covariance matrices with \(p/n\to 0\)
- Limiting spectral distribution of renormalized separable sample covariance matrices when \(p/n\to 0\)
- On the equivalence between kernel quadrature rules and random feature expansions
- Partial transposition of random states and non-centered semicircular distributions
- Spectral analysis of large dimensional random matrices
- Spiked singular values and vectors under extreme aspect ratios
- Strong convergence of ESD for the generalized sample covariance matrices when \(p/n \to 0\)
- Testing the sphericity of a covariance matrix when the dimension is much larger than the sample size
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
- The PPT square conjecture holds generically for some classes of independent states
- The interpolation phase transition in neural networks: memorization and generalization under lazy training
- The limiting distributions of eigenvalues of sample correlation matrices
- The limiting spectral distribution of the product of the Wigner matrix and a nonnegative definite matrix
- The smallest eigenvalue of a large dimensional Wishart matrix
- Universality Laws for High-Dimensional Learning With Random Features
- User-friendly tail bounds for sums of random matrices