The Principles of Deep Learning Theory
From MaRDI portal
Publication: 5070199
DOI: 10.1017/9781009023405 · zbMath: 1507.68003 · arXiv: 2106.10165 · OpenAlex: W3176723190 · Wikidata: Q113991308 · Scholia: Q113991308 · MaRDI QID: Q5070199
Publication date: 11 April 2022
Full work available at URL: https://arxiv.org/abs/2106.10165
Keywords: neural networks; renormalization group; representation theory; neural tangent kernel; nearly-Gaussian distributions
MSC classification: Artificial neural networks and deep learning (68T07); Introductory exposition (textbooks, tutorial papers, etc.) pertaining to computer science (68-01)
Related Items (7)
Unified field theoretical approach to deep and recurrent neuronal networks ⋮ Asymptotics of representation learning in finite Bayesian neural networks ⋮ Inferring parameters of pyramidal neuron excitability in mouse models of Alzheimer's disease using biophysical modeling and deep learning ⋮ Random neural networks in the infinite width limit as Gaussian processes ⋮ \(p\)-adic statistical field theory and deep belief networks ⋮ On random matrices arising in deep neural networks: General I.I.D. case ⋮ \(\alpha\)-stable convergence of heavy-/light-tailed infinitely wide neural networks