Deep learning models for global coordinate transformations that linearise PDEs

From MaRDI portal
Publication:5014841




Abstract: We develop a deep autoencoder architecture that can be used to find a coordinate transformation which turns a nonlinear PDE into a linear PDE. Our architecture is motivated by the linearizing transformations provided by the Cole-Hopf transform for Burgers' equation and the inverse scattering transform for completely integrable PDEs. By leveraging a residual network architecture, a near-identity transformation can be exploited to encode intrinsic coordinates in which the dynamics are linear. The resulting dynamics are given by a Koopman operator matrix \(\mathbf{K}\). The decoder allows us to transform back to the original coordinates as well. Multiple time step prediction can be performed by repeated multiplication by the matrix \(\mathbf{K}\) in the intrinsic coordinates. We demonstrate our method on a number of examples, including the heat equation and Burgers' equation, as well as the substantially more challenging Kuramoto-Sivashinsky equation, showing that our method provides a robust architecture for discovering interpretable, linearizing transforms for nonlinear PDEs.
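The prediction scheme described in the abstract (encode to intrinsic coordinates, advance by repeated multiplication by \(\mathbf{K}\), decode back) can be sketched in a minimal form. The sketch below uses placeholder linear maps for the encoder and decoder; the paper itself uses deep residual networks, and the specific matrices here are illustrative assumptions, not the trained model.

```python
import numpy as np

# Minimal sketch of the Koopman-autoencoder prediction loop described in
# the abstract. The encoder/decoder here are placeholder linear maps (the
# paper uses deep residual networks); K is the learned Koopman operator
# matrix acting in the intrinsic coordinates. All values are illustrative.

rng = np.random.default_rng(0)
n_state, n_latent = 8, 4

# Placeholder "trained" components (random, for illustration only).
W_enc = rng.standard_normal((n_latent, n_state)) * 0.1
W_dec = np.linalg.pinv(W_enc)  # rough inverse as a stand-in decoder

# A stable diagonal K as a stand-in for the learned linear dynamics.
K = np.diag(np.exp(-0.1 * np.arange(1, n_latent + 1)))

def encode(x):
    # A near-identity residual encoding would be x + net(x);
    # here it is just a linear map for illustration.
    return W_enc @ x

def decode(z):
    return W_dec @ z

def predict(x0, steps):
    """Advance `steps` time steps by repeated multiplication by K."""
    z = encode(x0)
    for _ in range(steps):
        z = K @ z
    return decode(z)

x0 = rng.standard_normal(n_state)
x5 = predict(x0, 5)
# Equivalent closed form: decode(np.linalg.matrix_power(K, 5) @ encode(x0))
```

Because the dynamics in the intrinsic coordinates are linear, a multi-step forecast reduces to a matrix power, which is what makes the learned representation interpretable and cheap to iterate.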


