Deep limits of residual neural networks
DOI: 10.1007/s40687-022-00370-y · OpenAlex: W2898318847 · MaRDI QID: Q2679108
Yves van Gennip, Matthew Thorpe
Publication date: 19 January 2023
Published in: Research in the Mathematical Sciences
Full work available at URL: https://arxiv.org/abs/1810.11741
Keywords: regularity; ordinary differential equations; gamma-convergence; variational convergence; deep neural networks; deep layer limits
MSC classifications: Artificial neural networks and deep learning (68T07); Neural networks for/in biological studies, artificial life and related topics (92B20); Methods involving semicontinuity and convergence; relaxation (49J45); Existence theories for optimal control problems involving ordinary differential equations (49J15); Applications of difference equations (39A60)
Cites Work
- Continuum limit of total variation on point clouds
- Local minimization, variational evolution and \(\Gamma\)-convergence
- A variational approach to the consistency of spectral clustering
- An introduction to \(\Gamma\)-convergence
- Multilayer feedforward networks are universal approximators
- \(\Gamma\)-convergence of graph Ginzburg-Landau functionals
- Forward stability of ResNet and its variants
- Deep neural networks motivated by partial differential equations
- Deep relaxation: partial differential equations for optimizing deep neural networks
- A mean-field optimal control formulation of deep learning
- Blended coarse gradient descent for full quantization of deep neural networks
- A proposal on machine learning via dynamical systems
- Discrete Mathematics of Neural Networks
- Consistency of Cheeger and Ratio Graph Cuts
- Analysis of recursive stochastic algorithms
- A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction
- Stable architectures for deep neural networks
- Partial differential equation regularization for supervised machine learning
- Structure-preserving deep learning
- Neural networks and complexity theory
- Ensemble Kalman inversion: a derivative-free technique for machine learning tasks
- Analysis of $p$-Laplacian Regularization in Semisupervised Learning
- Deep Learning: An Introduction for Applied Mathematicians
- On a Stochastic Approximation Method
- Approximation by superpositions of a sigmoidal function
- PDE-based group equivariant convolutional neural networks