Neural Control of Discrete Weak Formulations: Galerkin, Least-Squares and Minimal-Residual Methods with Quasi-Optimal Weights
From MaRDI portal
Publication: Q6402170
DOI: 10.1016/j.cma.2022.115716
arXiv: 2206.07475
MaRDI QID: Q6402170
Authors: Ignacio Brevis, Ignacio Muga, K. G. van der Zee
Publication date: 15 June 2022
Abstract: There is tremendous potential in using neural networks to optimize numerical methods. In this paper, we introduce and analyse a framework for the neural optimization of discrete weak formulations, suitable for finite element methods. The main idea of the framework is to include a neural-network function acting as a control variable in the weak form. Finding the neural control that (quasi-)minimizes a suitable cost (or loss) functional then yields a numerical approximation with desirable attributes. In particular, the framework naturally allows the incorporation of known data of the exact solution, or of stabilization mechanisms (e.g., to remove spurious oscillations). The main result of our analysis pertains to the well-posedness and convergence of the associated constrained-optimization problem. In particular, we prove, under certain conditions, that the discrete weak forms are stable and that quasi-minimizing neural controls exist, which converge quasi-optimally. We specialize the analysis results to Galerkin, least-squares and minimal-residual formulations, where the neural-network dependence appears in the form of suitable weights. Elementary numerical experiments support our findings and demonstrate the potential of the framework.
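To illustrate the framework's main idea described in the abstract, the following is a minimal sketch in NumPy: a 1D Poisson model problem discretized with linear finite elements, where a tiny neural network produces element-wise weights in the bilinear form, and the network parameters are quasi-minimized against known data of the exact solution. The model problem, the network architecture, and the finite-difference training loop are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model problem (an assumed example): -u'' = f on (0,1), u(0) = u(1) = 0,
# with exact solution u(x) = sin(pi x), used as "known data".
u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)

n = 16                                  # number of elements
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)
mids = 0.5 * (nodes[:-1] + nodes[1:])   # element midpoints

# Tiny MLP (1 -> 4 -> 1) mapping element midpoint -> positive weight w_e(theta).
# The size and the output scaling are arbitrary illustrative choices.
sizes = [(4, 1), (4,), (1, 4), (1,)]    # shapes of W1, b1, W2, b2
n_par = sum(int(np.prod(s)) for s in sizes)

def weights_from(theta):
    i, params = 0, []
    for s in sizes:
        k = int(np.prod(s))
        params.append(theta[i:i + k].reshape(s))
        i += k
    W1, b1, W2, b2 = params
    hid = np.tanh(W1 @ mids[None, :] + b1[:, None])
    out = (W2 @ hid + b2[:, None]).ravel()
    return 1.0 + 0.5 * np.tanh(out)     # weights stay in (0.5, 1.5)

def solve_fem(w):
    """Neurally weighted Galerkin form: K = sum_e w_e * K_e."""
    K = np.zeros((n + 1, n + 1))
    F = np.zeros(n + 1)
    Ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    for e in range(n):
        K[e:e + 2, e:e + 2] += w[e] * Ke
        F[e:e + 2] += f(mids[e]) * h / 2.0      # midpoint quadrature
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
    return u

def loss(theta):
    """Data-misfit cost against the known exact solution at the nodes."""
    u = solve_fem(weights_from(theta))
    return np.mean((u - u_exact(nodes))**2)

# Quasi-minimize the cost over theta (normalized finite-difference gradient
# descent; the paper's analysis covers quasi-minimizers, not exact minimizers).
theta = 0.1 * rng.standard_normal(n_par)
eps, step = 1e-6, 0.02
init = loss(theta)
best, best_theta = init, theta.copy()
for _ in range(60):
    base = loss(theta)
    grad = np.array([(loss(theta + eps * np.eye(n_par)[j]) - base) / eps
                     for j in range(n_par)])
    gn = np.linalg.norm(grad)
    if gn == 0.0:
        break
    theta = theta - step * grad / gn
    cur = loss(theta)
    if cur < best:
        best, best_theta = cur, theta.copy()
theta = best_theta
```

The same structure carries over to the least-squares and minimal-residual settings discussed in the paper, where the network-dependent weights enter the residual functional instead of the Galerkin bilinear form.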
Artificial neural networks and deep learning (68T07)
Finite element, Rayleigh-Ritz and Galerkin methods for boundary value problems involving PDEs (65N30)