Neural ODEs as the deep limit of ResNets with constant weights
DOI: 10.1142/S0219530520400023
MaRDI QID: Q4995042
Publication date: 23 June 2021
Published in: Analysis and Applications
Full work available at URL: https://arxiv.org/abs/1906.12183
Keywords: ordinary differential equation; partial differential equations; machine learning; stochastic gradient descent; Fokker-Planck; deep neural network; neural ODE; ResNet
MSC classification: Point estimation (62F10); Stochastic ordinary differential equations (aspects of stochastic analysis) (60H10); Learning and adaptive systems in artificial intelligence (68T05); Theoretical approximation of solutions to ordinary differential equations (34A45); Stability and convergence of numerical methods for ordinary differential equations (65L20); Fokker-Planck equations (35Q84)
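For orientation, a minimal sketch of the ResNet-to-neural-ODE correspondence referenced in the title (notation assumed here, not taken from this entry): a residual block with constant weights \(\theta\) and step size \(1/N\) is an explicit Euler step, and as the depth \(N \to \infty\) the discrete trajectory formally approaches the flow of an ODE.
\[
x_{k+1} = x_k + \tfrac{1}{N}\, f(x_k, \theta), \quad k = 0, \dots, N-1, \qquad \longrightarrow \qquad \dot{x}(t) = f(x(t), \theta), \quad t \in [0,1].
\]
The precise mode of convergence and its consequences for the associated training problems are established in the work linked above.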
Cites Work
- An introduction to \(\Gamma\)-convergence
- Statistical inference for model parameters in stochastic gradient descent
- On uniqueness problems related to the Fokker-Planck-Kolmogorov equation for measures
- Universality of deep convolutional neural networks
- Deep relaxation: partial differential equations for optimizing deep neural networks
- A mean-field optimal control formulation of deep learning
- On Uniqueness of a Probability Solution to the Cauchy Problem for the Fokker–Planck–Kolmogorov Equation
- Existence and Uniqueness of Solutions to Fokker–Planck Type Equations with Irregular Coefficients
- Logarithmic Sobolev Inequalities
- Theoretical Insights Into the Optimization Landscape of Over-Parameterized Shallow Neural Networks
- Deep Learning: An Introduction for Applied Mathematicians
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- An introduction to the theory of large deviations
- Optimal Transport