Deep learning models for global coordinate transformations that linearise PDEs

From MaRDI portal

DOI: 10.1017/S0956792520000327
zbMATH Open: 1479.35021
arXiv: 1911.02710
OpenAlex: W3088290697
Wikidata: Q114116642 (Scholia: Q114116642)
MaRDI QID: Q5014841
FDO: Q5014841

Craig Gin, Bethany Lusch, Steven L. Brunton, J. Nathan Kutz

Publication date: 8 December 2021

Published in: European Journal of Applied Mathematics

Abstract: We develop a deep autoencoder architecture that can be used to find a coordinate transformation which turns a nonlinear PDE into a linear PDE. Our architecture is motivated by the linearizing transformations provided by the Cole-Hopf transform for Burgers equation and the inverse scattering transform for completely integrable PDEs. By leveraging a residual network architecture, a near-identity transformation can be exploited to encode intrinsic coordinates in which the dynamics are linear. The resulting dynamics are given by a Koopman operator matrix $\mathbf{K}$. The decoder allows us to transform back to the original coordinates as well. Multiple time step prediction can be performed by repeated multiplication by the matrix $\mathbf{K}$ in the intrinsic coordinates. We demonstrate our method on a number of examples, including the heat equation and Burgers equation, as well as the substantially more challenging Kuramoto-Sivashinsky equation, showing that our method provides a robust architecture for discovering interpretable, linearizing transforms for nonlinear PDEs.
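The prediction pipeline described in the abstract can be sketched in NumPy: a near-identity residual encoder maps the state into intrinsic coordinates, the dynamics advance there by repeated multiplication with a matrix $\mathbf{K}$, and a residual decoder maps back. This is a minimal untrained sketch; the network shapes, sizes, and random initialisation are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

class ResidualCoder:
    """Near-identity map y = x + W2 tanh(W1 x): a one-hidden-layer
    residual network standing in for the paper's encoder/decoder
    (weights here are small random values, not trained)."""
    def __init__(self, dim, hidden, scale=0.01):
        self.W1 = scale * rng.standard_normal((hidden, dim))
        self.W2 = scale * rng.standard_normal((dim, hidden))

    def __call__(self, x):
        return x + self.W2 @ np.tanh(self.W1 @ x)

dim, hidden = 64, 32
encoder = ResidualCoder(dim, hidden)   # physical state -> intrinsic coordinates
decoder = ResidualCoder(dim, hidden)   # intrinsic coordinates -> physical state
K = np.eye(dim) + 0.01 * rng.standard_normal((dim, dim))  # Koopman matrix (placeholder)

def predict(x0, n_steps):
    """Advance n_steps by repeated multiplication with K in intrinsic
    coordinates, then decode back to the original coordinates."""
    v = encoder(x0)
    v = np.linalg.matrix_power(K, n_steps) @ v
    return decoder(v)

x0 = rng.standard_normal(dim)   # a sample (e.g. discretised PDE) state
x5 = predict(x0, 5)             # forecast five steps ahead
```

In the paper the encoder, decoder, and $\mathbf{K}$ are fitted jointly so that the latent dynamics are linear; the sketch above only illustrates the data flow at inference time.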


Full work available at URL: https://arxiv.org/abs/1911.02710




Cited In (8)

