Learning Interface Conditions in Domain Decomposition Solvers
Publication: Q6399652
arXiv: 2205.09833 · MaRDI QID: Q6399652 · FDO: Q6399652
Authors: Ali Taghibakhshi, Nicolas Nytko, Tareq Zaman, Scott MacLachlan, Luke Olson, Matthew West
Publication date: 19 May 2022
Abstract: Domain decomposition methods are widely used and effective in the approximation of solutions to partial differential equations. Yet the optimal construction of these methods requires tedious analysis and is often available only in simplified, structured-grid settings, limiting their use for more complex problems. In this work, we generalize optimized Schwarz domain decomposition methods to unstructured-grid problems, using Graph Convolutional Neural Networks (GCNNs) and unsupervised learning to learn optimal modifications at subdomain interfaces. A key ingredient in our approach is an improved loss function, enabling effective training on relatively small problems, but robust performance on arbitrarily large problems, with computational cost linear in problem size. The performance of the learned linear solvers is compared with both classical and optimized domain decomposition algorithms, for both structured- and unstructured-grid problems.
Has companion code repository: https://github.com/compdyn/learning-oras
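The abstract's setting can be illustrated with a minimal sketch (not the paper's method or the companion repository's code): a classical multiplicative Schwarz iteration for a 1D Poisson model problem, with comments marking the local interface rows that optimized Schwarz variants modify and that this work proposes to learn with a GCNN. The problem size, subdomain partition, and sweep count below are illustrative choices.

```python
import numpy as np

def poisson_matrix(n):
    # Standard 1D Poisson (Dirichlet) matrix: tridiag(-1, 2, -1).
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def schwarz_sweep(A, b, x, subdomains):
    # One multiplicative Schwarz sweep: solve each overlapping local
    # problem in turn against the current global residual.
    for idx in subdomains:
        r = b - A @ x
        Ai = A[np.ix_(idx, idx)]
        # Optimized Schwarz methods would modify Ai's rows at the
        # subdomain interfaces (e.g. Robin-type conditions); the paper
        # learns such interface modifications with a GCNN. This sketch
        # keeps the classical Dirichlet truncation unchanged.
        x = x.copy()
        x[idx] += np.linalg.solve(Ai, r[idx])
    return x

n = 40
A = poisson_matrix(n)
b = np.ones(n)
# Two overlapping subdomains (illustrative partition with overlap).
subdomains = [np.arange(0, 24), np.arange(16, n)]
x = np.zeros(n)
for _ in range(30):
    x = schwarz_sweep(A, b, x, subdomains)
residual = np.linalg.norm(b - A @ x)
print(f"residual after 30 sweeps: {residual:.2e}")
```

In this 1D model the overlap between the two subdomains already yields fast geometric convergence; the interface-condition tuning that the paper addresses matters most on unstructured grids, where no closed-form optimized parameters are available.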