Normalization effects on deep neural networks
Publication: 6194477
DOI: 10.3934/fods.2023004
arXiv: 2209.01018
OpenAlex: W4323905478
MaRDI QID: Q6194477
Authors: Jiahui Yu, Konstantinos V. Spiliopoulos
Publication date: 14 February 2024
Published in: Foundations of Data Science
Full work available at URL: https://arxiv.org/abs/2209.01018
Keywords: neural networks; asymptotic expansions; machine learning; out-of-sample performance; scaling effects; normalization effect
MSC classification: Central limit and other weak theorems (60F05); Stochastic processes (60G99); General topics in artificial intelligence (68T01)
Cites Work
- Machine learning strategies for systems with invariance properties
- Approximation and estimation bounds for artificial neural networks
- Multilayer feedforward networks are universal approximators
- Large deviations and mean-field theory for asymmetric random recurrent neural networks
- Nonlinearity creates linear independence
- DGM: a deep learning algorithm for solving partial differential equations
- Normalization effects on shallow neural networks and related asymptotic expansions
- Gradient descent optimizes over-parameterized deep ReLU networks
- Mean field analysis of neural networks: a central limit theorem
- A mean field view of the landscape of two-layer neural networks
- Mean Field Analysis of Deep Neural Networks
- Asymptotics of Reinforcement Learning with Neural Networks
- Mean Field Analysis of Neural Networks: A Law of Large Numbers
- Universal features of price formation in financial markets: perspectives from deep learning
- Reynolds averaged turbulence modelling using deep neural networks with embedded invariance
- Scaling description of generalization with number of parameters in deep learning