Revision as of 12:03, 13 July 2022
COMPARING PARTITIONED COUPLING VS. MONOLITHIC BLOCK PRECONDITIONING FOR SOLVING STOKES-DARCY SYSTEMS
PID (if applicable): arxiv:2108.13229
Problem Statement
Instationary (time-dependent), coupled Stokes-Darcy two-domain problem: a system of free flow adjacent to a permeable (porous) medium. Two solution approaches (partitioned coupling and monolithic block preconditioning) are compared against each other and against direct solving.
Object of Research and Objective
Analysis of the runtime and memory behavior of the two methods "partitioned coupling" and "monolithic block preconditioning" in comparison to a direct solver. The motivation is to avoid known problems of simple sparse direct solvers applied to the (linearized) subproblems: poor parallel scaling and untrustworthy solutions for badly conditioned systems.
Procedure
Solve a Stokes-Darcy system with three approaches, namely partitioned coupling, monolithic block preconditioning, and direct solving, and subsequently compare their runtime and memory behavior.
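The partitioned-vs-monolithic distinction can be illustrated on a toy 2x2 block system. The sketch below uses hypothetical placeholder matrices (not the paper's discretized Stokes-Darcy operators): the partitioned loop alternates subdomain solves with the other field frozen, and the result is checked against a monolithic direct solve of the full block system.

```python
import numpy as np

# Toy block system  [A B; C D] [x; y] = [f; g], loosely mimicking a coupled
# Stokes-Darcy problem: x = free-flow unknowns, y = porous-medium unknowns,
# B and C carry the interface coupling. All matrices are illustrative.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
D = np.array([[5.0, 0.5], [0.5, 4.0]])
B = np.array([[0.2, 0.0], [0.0, 0.1]])
C = B.T
f = np.array([1.0, 2.0])
g = np.array([0.5, 1.5])

def partitioned_solve(tol=1e-10, max_iter=100):
    """Fixed-point (Picard) style partitioned iteration: solve each
    subproblem in turn, feeding the latest value of the other field."""
    x, y = np.zeros(2), np.zeros(2)
    for k in range(max_iter):
        x_new = np.linalg.solve(A, f - B @ y)      # free-flow subsolve
        y_new = np.linalg.solve(D, g - C @ x_new)  # porous-medium subsolve
        if np.linalg.norm(x_new - x) + np.linalg.norm(y_new - y) < tol:
            return x_new, y_new, k + 1
        x, y = x_new, y_new
    return x, y, max_iter

x, y, iters = partitioned_solve()

# Monolithic reference: direct solve of the assembled block system.
M = np.block([[A, B], [C, D]])
ref = np.linalg.solve(M, np.concatenate([f, g]))
print(np.allclose(np.concatenate([x, y]), ref, atol=1e-8))  # expected: True
```

With weak interface coupling (small B, C) the fixed-point loop converges quickly; strong coupling is exactly the regime where acceleration (quasi-Newton) or a monolithic block-preconditioned solver pays off.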
Involved Disciplines
Environmental Systems, Mathematics
Data Streams
Model
- Stokes flow in the free-flow domain
- Darcy's law in the porous domain
Discretization
- Time: first-order backward Euler scheme
- Space: finite volumes
- Porous domain (Darcy): two-point flux approximation for the pressure
- Free-flow domain (Stokes): staggered grid for pressure and velocity, upwind scheme for the approximation of fluxes
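The first-order backward Euler scheme named above turns each time step into a linear solve. A minimal sketch for a generic linear system du/dt = A u (the matrix here is an illustrative stiff operator, not the discretized Stokes-Darcy system):

```python
import numpy as np

# Implicit (backward) Euler for du/dt = A u:
# each step solves (I - dt*A) u_{n+1} = u_n, which is unconditionally
# stable and hence suited to stiff, diffusion-dominated problems.
A = np.array([[-100.0, 1.0],   # illustrative stiff operator
              [1.0,   -0.5]])
u = np.array([1.0, 1.0])       # initial condition
dt, n_steps = 0.1, 50
I = np.eye(2)

for _ in range(n_steps):
    u = np.linalg.solve(I - dt * A, u)  # one implicit solve per step

print(u)  # decays toward the steady state u = 0
```

In the actual simulation this per-step linear system is exactly the large sparse system handed to the direct or iterative solvers compared in this study.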
Variables
Name | Unit | Symbol |
---|---|---|
Pressure (Dirichlet pressure) | - | |
Velocity (Neumann velocity) | - |
Process Information
Process Steps
Applied Methods
ID | Name | Process Step | Parameter | implemented by |
---|---|---|---|---|
wikidata:Q7001954 | Dirichlet-Neumann coupling | Coupling | ||
wikidata:Q1683631 | Picard iteration (fixed-point iteration) | Coupling | ||
wikidata:Q25098909 | inverse least-squares interface quasi-Newton | Coupling | ||
wikidata:Q1069090 | block-Gauss-Seidel method | Preconditioner | ||
wikidata:Q2467290 | Umfpack | Solver | ||
wikidata:Q56564057 | PD-GMRES | Solver | k (subiteration parameter, determined automatically), tolerance: relative residual... | |
wikidata:Q56560244 | Bi-CGSTAB | Solver | tolerance: relative residual... | |
wikidata:Q1471828 | AMG method | Preconditioner | ||
wikidata:Q17144437 | Uzawa iterations | Preconditioner | ||
wikidata:Q1654069 | ILU(0) factorization | Preconditioner |
Software used
ID | Name | Description | Version | Programming Language | Dependencies | versioned | published | documented |
---|---|---|---|---|---|---|---|---|
sw:8713 | preCICE | Library for coupling simulations | v2104.0 | Core library in C++; bindings for Fortran, Python, C, Matlab; adapters dependent on the simulation code (Fortran, Python, C, Matlab) | Linux, Boost, MPI, ... | https://github.com/precice/precice | https://doi.org/10.18419/darus-2125 | https://precice.org/docs.html |
sw:14293 | DuMux | DUNE for Multi-{Phase, Component, Scale, Physics, …} flow and transport in porous media | | C++, Python bindings, utility scripts in Python | Linux, DUNE (C++ framework), CMake (module chains), package-config, compiler, build-essentials, dpg | https://git.iws.uni-stuttgart.de/dumux-repositories/dumux | https://zenodo.org/record/5152939#.YQva944zY2w | https://dumux.org/docs/ |
sw:18749 | ISTL | “Iterative Solver Template Library” (ISTL), part of the “Distributed and Unified Numerics Environment” (DUNE) | | C++ | Linux, DUNE (C++ framework) | https://gitlab.dune-project.org/core/dune-istl | https://doi.org/10.1007/978-3-540-75755-9_82 | https://www.dune-project.org/modules/dune-istl/ |
Hardware
ID | Name | Processor | Compiler | #Nodes | #Cores |
---|---|---|---|---|---|
| | AMD EPYC 7551P CPU | | 1 | 1 |
Input Data
ID | Name | Size | Data Structure | Format Representation | Format Exchange | Binary/Text | Proprietary | To Publish | To Archive |
---|---|---|---|---|---|---|---|---|---|
LGS (linear system of equations) | O() matrix size | data structure in DUNE/DuMux | numbers | open | ? | ?
Output Data
ID | Name | Size | Data Structure | Format Representation | Format Exchange | Binary/Text | Proprietary | To Publish | To Archive |
---|---|---|---|---|---|---|---|---|---|
Runtime comparison of the direct solver Umfpack and iterative solvers with partitioned coupling and block preconditioning | small | statistics | numbers | ? | ? | ||||
Runtime comparison of the best performing solver configurations | small | statistics | numbers | ? | ? | ||||
arxiv:2108.13229 | Paper | O(KB) | text | open | yes | yes |
Reproducibility
Mathematical Reproducibility
Yes: all parameters are specified.
Runtime Reproducibility
Yes, for the same input samples.
Reproducibility of Results
Due to floating-point arithmetic, there is no bitwise reproducibility.
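The root cause is that floating-point addition is not associative, so a different reduction order (e.g. a different parallel partitioning of a dot product) changes the last bits of a result even though both runs are correct to rounding accuracy:

```python
# Same three summands, two grouping orders, two different doubles:
x = (0.1 + 0.2) + 0.3
y = 0.1 + (0.2 + 0.3)
print(x == y)  # False: x = 0.6000000000000001, y = 0.6
```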
Reproducibility on original Hardware
Reproducibility on other Hardware
a) Serial Computation
b) Parallel Computation
Transferability to
a) similar model parameters (other initial and boundary values)
b) other models
Legend
The following abbreviations are used in the document to indicate/resolve IDs:
doi: DOI / https://dx.doi.org/
sw: swMATH / https://swmath.org/software/
wikidata: https://www.wikidata.org/wiki/