Pages that link to "Item:Q2301498"
From MaRDI portal
The following pages link to Mean field analysis of neural networks: a central limit theorem (Q2301498):
Displaying 26 items.
- Machine learning from a continuous viewpoint. I (Q829085)
- Analysis of a two-layer neural network via displacement convexity (Q1996787)
- Linearized two-layers neural networks in high dimension (Q2039801)
- Normalization effects on shallow neural networks and related asymptotic expansions (Q2072629)
- Asymptotic properties of one-layer artificial neural networks with sparse connectivity (Q2105365)
- Mean-field and kinetic descriptions of neural differential equations (Q2148968)
- A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics (Q2197845)
- Mirror descent algorithms for minimizing interacting free energy (Q2204545)
- Optimization for deep learning: an overview (Q2218095)
- Landscape and training regimes in deep learning (Q2231925)
- (Q4998974)
- (Q5011561)
- Align, then memorise: the dynamics of learning with feedback alignment* (Q5049525)
- Align, then memorise: the dynamics of learning with feedback alignment* (Q5055410)
- Particle dual averaging: optimization of mean field neural network with global convergence rate analysis* (Q5055425)
- Mean Field Analysis of Deep Neural Networks (Q5076694)
- Asymptotics of Reinforcement Learning with Neural Networks (Q5084496)
- Unbiased Deep Solvers for Linear Parametric PDEs (Q5093244)
- Plateau Phenomenon in Gradient Descent Training of RELU Networks: Explanation, Quantification, and Avoidance (Q5157837)
- Machine Learning and Computational Mathematics (Q5162355)
- Mean Field Analysis of Neural Networks: A Law of Large Numbers (Q5219306)
- Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup* (Q5857458)
- Continuous limits of residual neural networks in case of large input data (Q6098879)
- High‐dimensional limit theorems for SGD: Effective dynamics and critical scaling (Q6182180)
- Normalization effects on deep neural networks (Q6194477)
- Efficient and stable SAV-based methods for gradient flows arising from deep learning (Q6497260)