The Continuous Formulation of Shallow Neural Networks as Wasserstein-Type Gradient Flows
Publication: 5886422
DOI: 10.1007/978-3-031-05331-3_3
OpenAlex: W4312382631
MaRDI QID: Q5886422
Xavier Fernández-Real, Alessio Figalli
Publication date: 5 April 2023
Published in: Analysis at Large
Full work available at URL: https://doi.org/10.1007/978-3-031-05331-3_3
Mathematics Subject Classification:
- Artificial neural networks and deep learning (68T07)
- Stochastic analysis (60Hxx)
- Transport equations (35Q49)
Cites Work
- Machine learning from a continuous viewpoint. I
- Analysis of a two-layer neural network via displacement convexity
- An invitation to optimal transport, Wasserstein distances, and gradient flows
- A mean-field optimal control formulation of deep learning
- A proposal on machine learning via dynamical systems
- The Variational Formulation of the Fokker–Planck Equation
- A mean field view of the landscape of two-layer neural networks
- Mean Field Analysis of Deep Neural Networks
- Mean Field Analysis of Neural Networks: A Law of Large Numbers