Energy-dissipative evolutionary deep operator neural networks
From MaRDI portal
Publication:6187616
Abstract: The Energy-Dissipative Evolutionary Deep Operator Neural Network is an operator-learning neural network. It is designed to seek numerical solutions for a class of partial differential equations rather than for a single partial differential equation, such as partial differential equations with different parameters or different initial conditions. The network consists of two sub-networks, the Branch net and the Trunk net. For a target operator G, the Branch net encodes different input functions u sampled at the same number of sensors, and the Trunk net evaluates the output function at any location y. By minimizing the error between the evaluated output q and the expected output G(u)(y), DeepONet generates a good approximation of the operator G. To preserve essential physical properties of PDEs, such as the energy dissipation law, a scalar auxiliary variable (SAV) approach is adopted to formulate the minimization problem. It introduces a modified energy and enables an unconditional energy dissipation law at the discrete level. By treating the parameter as a function of time t, the network can predict an accurate solution at any later time while being fed data only at the initial state. The required data can be generated from the initial conditions, which are readily available. To validate the accuracy and efficiency of the neural network, numerical simulations of several partial differential equations are provided, including heat equations, parametric heat equations, and Allen-Cahn equations.
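The branch/trunk structure described in the abstract can be sketched as follows. This is a minimal illustration with hypothetical, untrained layer sizes (the paper's actual architecture, SAV time stepping, and training procedure are not reproduced here): the Branch net encodes an input function u sampled at m fixed sensors, the Trunk net encodes an evaluation location y, and their inner product approximates G(u)(y).

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    """Random weights for a small fully connected net (illustration only)."""
    return [(rng.standard_normal((a, b)) / np.sqrt(a), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass with tanh activations on the hidden layers."""
    for w, b in params[:-1]:
        x = np.tanh(x @ w + b)
    w, b = params[-1]
    return x @ w + b

m, p = 50, 32                        # sensors, latent width (assumed values)
branch = mlp_params([m, 64, p])      # encodes u(x_1), ..., u(x_m)
trunk = mlp_params([1, 64, p])       # encodes the evaluation location y

def deeponet(u_sensors, y):
    """Approximate G(u)(y) as the inner product of branch and trunk outputs."""
    b = mlp(branch, u_sensors)           # shape (p,)
    t = mlp(trunk, np.atleast_1d(y))     # shape (p,)
    return float(b @ t)

# Example: evaluate the (untrained) operator on u(x) = sin(pi x) at y = 0.3
u = np.sin(np.pi * np.linspace(0.0, 1.0, m))
q = deeponet(u, 0.3)
```

In training, the weights of both sub-networks would be fit by minimizing the mismatch between q and the expected output G(u)(y) over many sampled input functions and locations, with the SAV-modified energy constraining the discrete dynamics as described above.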
Cites work
- A diffuse-interface method for simulating two-phase flows of complex fluids
- A physics-informed operator regression framework for extracting data-driven continuum models
- A rapidly converging phase field model
- Approximation by superpositions of a sigmoidal function
- ConvPDE-UQ: convolutional neural networks with quantified uncertainty for heterogeneous elliptic partial differential equations on varied domains
- Convergence of the phase field model to its sharp interface limits
- Deep learning of parameterized equations with applications to uncertainty quantification
- Diffuse-interface methods in fluid mechanics
- Data-driven deep learning of partial differential equations in modal space
- Deep learning in high dimension: neural network expression rates for generalized polynomial chaos expansions in UQ
- Free energy of a nonuniform system. I: Interfacial free energy
- Multilayer feedforward networks are universal approximators
- Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data
- Solving parametric PDE problems with artificial neural networks
- Two-phase binary fluids and immiscible fluids described by an order parameter
- The random feature model for input-output maps between Banach spaces
- The scalar auxiliary variable (SAV) approach for gradient flows
Cited in: 1 publication