A framework for machine learning of model error in dynamical systems
DOI: 10.1090/cams/10
arXiv: 2107.06658
OpenAlex: W3182899221
MaRDI QID: Q6076655
Matthew E. Levine, Andrew M. Stuart
Publication date: 17 October 2023
Published in: Communications of the American Mathematical Society
Full work available at URL: https://arxiv.org/abs/2107.06658
Keywords: dynamical systems; recurrent neural networks; statistical learning; model error; reservoir computing; random features
Mathematics Subject Classification:
- Knowledge representation (68T30)
- Numerical methods for initial value problems involving ordinary differential equations (65L05)
- Ergodic theorems, spectral theory, Markov operators (37A30)
- Time series analysis of dynamical systems (37M10)
- Approximation by other special function classes (41A30)
- Computational methods for ergodic theory (approximation of invariant measures, computation of Lyapunov exponents, entropy, etc.) (37M25)
Related Items (7)
Cites Work
- Bagging predictors
- Discovering governing equations from data by sparse identification of nonlinear dynamical systems
- Analysis of the 3DVAR filter for the partially observed Lorenz '63 model
- Reservoir computing approaches to recurrent neural network training
- Bayesian solution uncertainty quantification for differential equations
- Recurrent neural network closure of parametric POD-Galerkin reduced-order models based on the Mori-Zwanzig formalism
- Modeling of missing dynamical systems: deriving parametric models using a nonparametric framework
- Modelling and control of dynamic systems using Gaussian process models
- A family of embedded Runge-Kutta formulae
- Neural networks and dynamical systems
- Statistical inference for ergodic diffusion processes
- Data-based stochastic model reduction for the Kuramoto-Sivashinsky equation
- A dynamic subgrid scale model for large eddy simulations based on the Mori-Zwanzig formalism
- Statistical and computational inverse problems
- A computational strategy for multiscale systems with applications to Lorenz 96 model
- Machine-learning error models for approximate solutions to parameterized systems of nonlinear equations
- Model reduction and neural networks for parametric PDEs
- Error bounds of the invariant statistics in machine learning of ergodic Itô diffusions
- Learning dynamical systems from data: a simple cross-validation perspective. I: Parametric kernel flows
- Supervised learning from noisy observations: combining machine-learning techniques with data assimilation
- Kernel-based prediction of non-Markovian time series
- Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network
- Data-driven model reduction, Wiener projections, and the Koopman-Mori-Zwanzig formalism
- Operator-theoretic framework for forecasting nonlinear time series with kernel analog techniques
- Machine learning for prediction with missing dynamics
- Solving and learning nonlinear PDEs with Gaussian processes
- Stable a posteriori LES of 2D turbulence using convolutional neural networks: backscattering analysis and generalization to higher Re via transfer learning
- Modelling spatiotemporal dynamics from Earth observation data with neural differential equations
- Echo state networks are universal
- Variance continuity for Lorenz flows
- A priori estimates of the population risk for two-layer neural networks
- Data-driven spectral analysis of the Koopman operator
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- Consistency of maximum likelihood estimation for some dynamical systems
- On dynamic mode decomposition: theory and applications
- Numerical techniques for multi-scale dynamical systems with stochastic effects
- The Random Feature Model for Input-Output Maps between Banach Spaces
- Optimal prediction and the Mori-Zwanzig representation of irreversible processes
- On the estimation of the Mori-Zwanzig memory integral
- A priori estimation of memory effects in reduced-order models of nonlinear systems using the Mori-Zwanzig formalism
- Extracting Sparse High-Dimensional Dynamics from Limited Data
- Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data
- Stable architectures for deep neural networks
- Deterministic Nonperiodic Flow
- Orthogonal Gaussian process models
- Kernel Analog Forecasting: Multiscale Test Problems
- Augmenting physical models with deep networks for complex dynamics forecasting
- Learning stochastic closures using ensemble Kalman inversion
- Autodifferentiable Ensemble Kalman Filters
- The Discovery of Dynamics via Linear Multistep Methods and Deep Learning: Error Estimation
- Extracting Structured Dynamical Systems Using Sparse Optimization With Very Few Samples
- Learning latent dynamics for partially observed chaotic systems
- Discovery of Dynamics Using Linear Multistep Methods
- SINDy-PI: a robust algorithm for parallel implicit sparse identification of nonlinear dynamics
- Model Reduction with Memory and the Machine Learning of Dynamical Systems
- Probabilistic Forecasting and Bayesian Data Assimilation
- Data-driven discovery of coordinates and governing equations
- Data Assimilation
- Data Assimilation
- Learning representations by back-propagating errors
- Turbulence Modeling in the Age of Data
- Central limit theorems and invariance principles for Lorenz attractors
- Multiscale Methods
- On Information and Sufficiency
- Inverse acoustic and electromagnetic scattering theory
- Smoothing noisy data with spline functions
- Approximation by superpositions of a sigmoidal function