Error analysis for physics-informed neural networks (PINNs) approximating Kolmogorov PDEs

From MaRDI portal
Publication:2095545

DOI: 10.1007/s10444-022-09985-9
zbMATH Open: 1502.65170
arXiv: 2106.14473
OpenAlex: W3186608048
MaRDI QID: Q2095545
FDO: Q2095545


Authors: Tim De Ryck, Siddhartha Mishra


Publication date: 17 November 2022

Published in: Advances in Computational Mathematics

Abstract: Physics-informed neural networks (PINNs) approximate solutions of PDEs by minimizing pointwise residuals. We derive rigorous bounds on the error incurred by PINNs in approximating the solutions of a large class of linear parabolic PDEs, namely Kolmogorov equations, which include the heat equation and the Black-Scholes equation of option pricing as examples. We construct neural networks whose PINN residual (generalization error) can be made as small as desired. We also prove that the total L2-error can be bounded by the generalization error, which in turn is bounded in terms of the training error, provided that a sufficient number of randomly chosen training (collocation) points is used. Moreover, we prove that the size of the PINNs and the number of training samples grow only polynomially with the underlying dimension, enabling PINNs to overcome the curse of dimensionality in this context. Together, these results provide a comprehensive error analysis for PINNs approximating Kolmogorov PDEs.
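As a minimal illustration of the residual minimization described in the abstract (the code below is a sketch, not the paper's implementation; all names are illustrative), a PINN for the 1D heat equation u_t = u_xx evaluates the pointwise PDE residual at randomly chosen collocation points and takes the mean squared residual as the training loss. Here an exact solution stands in for a trained network, and finite differences stand in for the automatic differentiation a real PINN would use, so the residual is near zero:

```python
import numpy as np

def u_exact(x, t):
    # Exact heat-equation solution, standing in for a trained network u_theta.
    return np.exp(-t) * np.sin(x)

def residual(u, x, t, h=1e-4):
    # Pointwise PDE residual r(x, t) = u_t - u_xx for the heat equation.
    # Central finite differences approximate the derivatives that a real
    # PINN would obtain via automatic differentiation.
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_t - u_xx

rng = np.random.default_rng(0)
x = rng.uniform(0, np.pi, 100)  # randomly chosen collocation points in space
t = rng.uniform(0, 1, 100)      # ... and in time

# Training loss = mean squared residual over the collocation points;
# it is tiny here because u_exact solves the PDE exactly.
loss = np.mean(residual(u_exact, x, t) ** 2)
print(f"mean squared residual: {loss:.2e}")
```

In the paper's terminology, driving this loss (the training error) to zero at sufficiently many random collocation points controls the generalization error, which in turn bounds the total L2-error.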


Full work available at URL: https://arxiv.org/abs/2106.14473










Cited In (29)






