Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black--Scholes Partial Differential Equations
DOI: 10.1137/19M125649X · zbMath: 1480.60191 · arXiv: 1809.03062 · MaRDI QID: Q5037569
Arnulf Jentzen, Julius Berner, Philipp Grohs
Publication date: 1 March 2022
Published in: SIAM Journal on Mathematics of Data Science
Full work available at URL: https://arxiv.org/abs/1809.03062
Keywords: curse of dimensionality; Kolmogorov equation; generalization error; empirical risk minimization; deep learning
MSC classifications:
- Learning and adaptive systems in artificial intelligence (68T05)
- Applications of stochastic analysis (to PDEs, etc.) (60H30)
- Numerical solutions to stochastic differential and integral equations (65C30)
- Neural nets and related approaches to inference from stochastic processes (62M45)
Related Items (52)
Uses Software
Cites Work
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII -- 2008
- Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003
- Provable approximation properties for deep neural networks
- The Deep Ritz Method: a deep learning-based numerical algorithm for solving variational problems
- A distribution-free theory of nonparametric regression
- DGM: a deep learning algorithm for solving partial differential equations
- Topological properties of the set of functions generated by neural networks of fixed size
- Solving the Kolmogorov PDE by means of deep learning
- Proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations
- Error bounds for approximations with deep ReLU networks
- Asymptotic expansion as prior knowledge in deep learning method for high dimensional BSDEs
- Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations
- Loss of regularity for Kolmogorov equations
- Stochastic simulation and Monte Carlo methods. Mathematical foundations of stochastic simulation
- Local Rademacher complexities
- On the mathematical foundations of learning
- Learning Theory
- Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ
- Neural Network Learning
- A mean field view of the landscape of two-layer neural networks
- Solving high-dimensional partial differential equations using deep learning
- Deep Neural Network Approximation Theory
- Optimal Approximation with Sparsely Connected Deep Neural Networks
- Rectified deep neural networks overcome the curse of dimensionality for nonsmooth value functions in zero-sum games of nonlinear stiff systems
- Deep optimal stopping
- Tools for computational finance
- Error bounds for approximation with neural networks