Deep learning models for global coordinate transformations that linearise PDEs
From MaRDI portal
Publication:5014841
Abstract: We develop a deep autoencoder architecture that can be used to find a coordinate transformation which turns a nonlinear PDE into a linear PDE. Our architecture is motivated by the linearizing transformations provided by the Cole-Hopf transform for Burgers equation and the inverse scattering transform for completely integrable PDEs. By leveraging a residual network architecture, a near-identity transformation can be exploited to encode intrinsic coordinates in which the dynamics are linear. The resulting dynamics are given by a Koopman operator matrix. The decoder allows us to transform back to the original coordinates as well. Multiple time step prediction can be performed by repeated multiplication by the matrix in the intrinsic coordinates. We demonstrate our method on a number of examples, including the heat equation and Burgers equation, as well as the substantially more challenging Kuramoto-Sivashinsky equation, showing that our method provides a robust architecture for discovering interpretable, linearizing transforms for nonlinear PDEs.
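The prediction scheme described in the abstract — encode once, advance in time by repeated multiplication with a linear matrix, then decode — can be illustrated with a minimal sketch. Since the paper's trained deep encoder/decoder networks are not reproduced here, this sketch uses the heat equation (one of the abstract's examples), which is already linear, so identity encoder/decoder maps suffice and the linear one-step matrix is simply the discrete heat propagator. All variable names are hypothetical and the setup is an assumption for illustration, not the authors' architecture.

```python
import numpy as np

# Toy setup: 1D heat equation u_t = nu * u_xx on a periodic grid.
n, dt, nu = 64, 1e-3, 1.0
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
h = x[1] - x[0]

# Discrete Laplacian with periodic boundary conditions.
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = 1.0
L *= nu / h**2

# One-step linear dynamics K = exp(dt * L), built from the (symmetric)
# eigendecomposition of L.  In the paper's setting K would be learned.
w, V = np.linalg.eigh(L)
K = V @ np.diag(np.exp(dt * w)) @ V.T

# For the already-linear heat equation, identity maps stand in for the
# trained encoder/decoder networks.
encoder = decoder = lambda u: u

# Multi-step prediction: encode once, multiply by K repeatedly, decode.
u0 = np.sin(x)
z = encoder(u0)
for _ in range(100):          # 100 steps of size dt
    z = K @ z
u_pred = decoder(z)

# The exact solution for sin(x) decays like exp(-nu * t).
u_exact = np.exp(-nu * 100 * dt) * np.sin(x)
print(np.max(np.abs(u_pred - u_exact)))   # small spatial-discretization error
```

For a nonlinear PDE such as Burgers or Kuramoto-Sivashinsky, the identity maps above would be replaced by the trained nonlinear encoder and decoder, while the time-stepping loop in the intrinsic coordinates stays exactly the same.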
Recommendations
- Deep hidden physics models: deep learning of nonlinear partial differential equations
- Data-driven discovery of coordinates and governing equations
- Data-driven discovery of PDEs in complex datasets
- PDE-Net 2.0: learning PDEs from data with a numeric-symbolic hybrid deep network
- Linearly recurrent autoencoder networks for learning dynamics
Cites work
- scientific article; zbMATH DE number 6678650
- scientific article; zbMATH DE number 3739989
- scientific article; zbMATH DE number 3284515
- A data-driven approximation of the Koopman operator: extending dynamic mode decomposition
- A kernel-based method for data-driven Koopman spectral analysis
- A variational approach to modeling slow processes in stochastic dynamical systems
- Analysis of Fluid Flows via Spectral Properties of the Koopman Operator
- Applied Koopman theory for partial differential equations and data-driven modeling of spatio-temporal systems
- Approximation by superpositions of a sigmoidal function
- Comparison of systems with complex behavior
- Data-driven discovery of coordinates and governing equations
- Data-driven model reduction and transfer operator approximation
- Data-driven modeling and scientific computation. Methods for complex systems and big data
- Data-driven science and engineering. Machine learning, dynamical systems, and control
- Deep learning
- Dynamic mode decomposition of numerical and experimental data
- Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator
- Fourth-Order Time-Stepping for Stiff PDEs
- Geometry of the ergodic quotient reveals coherent structures in flows
- Hamiltonian Systems and Transformation in Hilbert Space
- Introduction to Applied Nonlinear Dynamical Systems and Chaos
- Linearly recurrent autoencoder networks for learning dynamics
- On a quasi-linear parabolic equation occurring in aerodynamics
- On some dissipative fully discrete nonlinear Galerkin schemes for the Kuramoto-Sivashinsky equation
- Parsimonious representation of nonlinear dynamical systems through manifold learning: a chemotaxis case study
- Pattern recognition and machine learning
- Physics-Informed Probabilistic Learning of Linear Embeddings of Nonlinear Dynamics with Guaranteed Stability
- Spectral analysis of nonlinear flows
- Spectral properties of dynamical systems, model reduction and decompositions
- The Method of Near-Identity Transformations and Its Applications
- The partial differential equation \(u_t + uu_x = \mu u_{xx}\)
Cited in (8)
- Modern Koopman theory for dynamical systems
- Connections between deep learning and partial differential equations
- Parsimony as the ultimate regularizer for physics-informed machine learning
- Koopman neural operator as a mesh-free solver of non-linear partial differential equations
- \(\Phi\)-DVAE: physics-informed dynamical variational autoencoders for unstructured data assimilation
- Machine learning methods for reduced order modeling
- Semi-supervised invertible neural operators for Bayesian inverse problems
- Bridging the gap: machine learning to resolve improperly modeled dynamics