COMPARING PARTITIONED COUPLING VS. MONOLITHIC BLOCK PRECONDITIONING FOR SOLVING STOKES-DARCY SYSTEMS

PID (if applicable): arxiv:2108.13229

Problem Statement

Instationary, coupled Stokes-Darcy two-domain problem: a free flow adjacent to a permeable (porous) medium. Two approaches (partitioned coupling and monolithic block preconditioning) are compared against each other and against direct solving.

Object of Research and Objective

Analysis of the runtime and memory behavior of the two methods "partitioned coupling" and "monolithic block preconditioning" in comparison to a direct solver. The motivation is to avoid the problems that arise when simple sparse direct solvers are applied to the (linearized) subproblems: poor parallel scaling and untrustworthy solutions for badly conditioned systems.

Procedure

Solve a Darcy-Stokes system in three ways, namely partitioned coupling, monolithic block preconditioning, and direct solving, and subsequently compare runtime and memory behavior (a schematic timing harness is sketched below).
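
A minimal sketch, in Python, of how such a runtime and memory comparison can be instrumented; the toy dense system and all names are illustrative, the actual experiments run inside DuMux/DUNE:

  import time
  import tracemalloc
  import numpy as np

  def measure(solve, A, b):
      """Run one solver, return (solution, runtime [s], peak memory [bytes]).
      Note: tracemalloc only sees Python-level allocations; profiling the
      C++ solvers in the actual workflow needs external tools."""
      tracemalloc.start()
      t0 = time.perf_counter()
      x = solve(A, b)
      runtime = time.perf_counter() - t0
      _, peak = tracemalloc.get_traced_memory()
      tracemalloc.stop()
      return x, runtime, peak

  # Toy stand-in for the coupled linear system (LGS).
  rng = np.random.default_rng(42)
  A = np.eye(200) + 0.01 * rng.standard_normal((200, 200))
  b = rng.standard_normal(200)
  _, runtime_direct, mem_direct = measure(np.linalg.solve, A, b)
  print(f"direct solve: {runtime_direct:.2e} s, peak {mem_direct / 1e6:.2f} MB")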

Involved Disciplines

Environmental Systems, Mathematics

Data Streams

Model

  • Stokes flow in the free-flow domain
  • Darcy's law in the porous domain
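
A plausible write-up of these equations in standard notation (reconstructed from the description here, not copied from the paper; $v$ velocity, $p$ pressure, $\mu$ viscosity, $K$ permeability, $\phi$ porosity, $\rho$ density, superscripts $ff$/$pm$ for the free-flow and porous-medium domains):

  \begin{aligned}
    \partial_t v^{ff} - \nabla \cdot \big( \mu (\nabla v^{ff} + (\nabla v^{ff})^{\mathsf T}) - p^{ff} I \big) &= f && \text{in } \Omega^{ff} && \text{(Stokes momentum)} \\
    \nabla \cdot v^{ff} &= 0 && \text{in } \Omega^{ff} && \text{(Stokes mass)} \\
    v^{pm} &= -\tfrac{K}{\mu} \nabla p^{pm} && \text{in } \Omega^{pm} && \text{(Darcy's law)} \\
    \phi \, \partial_t \rho + \nabla \cdot (\rho \, v^{pm}) &= q && \text{in } \Omega^{pm} && \text{(porous-medium mass)}
  \end{aligned}

At the interface the two subproblems exchange the coupling data listed under Process Steps: the free-flow problem receives the porous-medium normal velocity $v^{pm} \cdot n$ as a Neumann datum, and the porous-medium problem receives the free-flow pressure $p^{ff}$ as a Dirichlet datum.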

Discretization

  • Time: first-order backward Euler scheme
  • Space: finite volumes (see the sketch after this list)
    • Porous domain (Darcy): two-point flux approximation for the pressure
    • Free-flow domain (Stokes): staggered grid for pressure and velocity, upwind scheme for the approximation of fluxes
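
A minimal 1-D sketch of this time and space discretization, assuming a scalar model pressure equation (the paper's actual discretization is the 2-D implementation in DuMux):

  import numpy as np

  # Backward Euler in time + two-point flux approximation (TPFA) in space for
  # the model equation  c * dp/dt = d/dx( k * dp/dx )  with no-flux boundaries.
  n, dx, k, c, dt = 50, 0.02, 1.0, 1.0, 1e-3
  p = np.zeros(n)
  p[0] = 1.0                         # initial pressure pulse in the first cell

  # TPFA: the flux between cells i and i+1 is k * (p[i+1] - p[i]) / dx;
  # taking flux differences per cell yields the tridiagonal operator T.
  T = (k / dx**2) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
  T[0, 0] = T[-1, -1] = k / dx**2    # no-flux (Neumann) boundary cells
  M = np.eye(n) + (dt / c) * T       # backward Euler: (I + dt/c T) p_new = p_old
  for _ in range(100):
      p = np.linalg.solve(M, p)      # one implicit time step
  assert abs(p.sum() - 1.0) < 1e-10  # mass is conserved under no-flux BCs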

Variables

Name | Unit | Symbol
Pressure (Dirichlet pressure) | - | p
Velocity (Neumann velocity) | - | v

Process Information

Process Steps

Partitioned Coupling Solving
  • Description: numerical solution of the problem with partitioned coupling (a schematic Picard loop is sketched below)
  • Input: LGS (linear system of equations); $S^{ff}$: $v^{pm} \cdot n$; $S^{pm}$: $p^{ff}$
  • Output: LGS_solved_cp, runtime_cp
  • Method: solvers (UMFPACK, PD-GMRES, Bi-CGSTAB), preconditioners (Uzawa, AMG), coupling (preCICE; Picard iteration; inverse least-squares interface quasi-Newton)
  • Environment: DuMux, preCICE, DUNE
  • Mathematical area: Numerical Mathematics

Monolithic Block-Preconditioning Solving
  • Description: solving with preconditioner + solver
  • Input: LGS
  • Output: LGS_solved_mlbp, runtime_mlbp
  • Method: solver (PD-GMRES); preconditioners (AMG, Uzawa, ILU(0), Block-Jacobi, Block-Gauss-Seidel)
  • Environment: DuMux
  • Mathematical area: Numerical Mathematics

Direct Solving
  • Description: solving with a sparse direct solver
  • Input: LGS
  • Output: LGS_solved_direct, runtime_direct
  • Method: solver (UMFPACK)
  • Environment: DuMux

Comparison
  • Description: comparison of the three solution approaches with regard to runtime, scalability, and memory requirements
  • Input: runtime_cp, runtime_mlbp, runtime_direct
  • Output: graph comparison of the runtimes, paper
  • Parameter: degrees of freedom
  • Mathematical area: Numerical Mathematics
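
A schematic sketch of the partitioned Dirichlet-Neumann exchange with relaxed Picard iteration, using scalar stand-ins for the two subproblem solves (this is not the preCICE API; the maps and the relaxation factor are illustrative):

  def solve_free_flow(interface_flux):
      """Stand-in for the Stokes solve S^ff with v^pm * n imposed as Neumann
      datum; returns the resulting interface pressure p^ff."""
      return 0.5 * interface_flux + 1.0

  def solve_porous_medium(interface_pressure):
      """Stand-in for the Darcy solve S^pm with p^ff imposed as Dirichlet
      datum; returns the resulting interface normal flux v^pm * n."""
      return 0.8 * interface_pressure

  # Relaxed Picard (fixed-point) iteration over the interface data.  In the
  # workflow preCICE orchestrates this exchange and can replace the plain
  # relaxation by inverse least-squares interface quasi-Newton acceleration.
  flux, omega = 0.0, 0.5
  for it in range(100):
      pressure = solve_free_flow(flux)
      flux_new = solve_porous_medium(pressure)
      if abs(flux_new - flux) < 1e-12:
          break
      flux = (1.0 - omega) * flux + omega * flux_new
  print(f"converged after {it} iterations, interface flux = {flux:.6f}")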

Applied Methods

ID | Name | Process Step | Parameter | Implemented by
wikidata:Q7001954 | Dirichlet-Neumann coupling | Coupling | |
wikidata:Q1683631 | Picard iteration (fixed-point iteration) | Coupling | |
wikidata:Q25098909 | inverse least-squares interface quasi-Newton | Coupling | |
wikidata:Q1069090 | block-Gauss-Seidel method | Preconditioner | |
wikidata:Q2467290 | UMFPACK | Solver | |
wikidata:Q56564057 | PD-GMRES | Solver | k (subiteration parameter, determined automatically); tolerance: relative residual... |
wikidata:Q56560244 | Bi-CGSTAB | Solver | tolerance: relative residual... |
wikidata:Q1471828 | AMG method | Preconditioner | |
wikidata:Q17144437 | Uzawa iteration | Preconditioner | |
wikidata:Q1654069 | ILU(0) factorization | Preconditioner | |
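
To illustrate the block-preconditioning idea, a minimal SciPy sketch of GMRES with a block-Gauss-Seidel preconditioner on a toy 2x2 block system; plain GMRES and exact sparse-LU block solves stand in here for the PD-GMRES and AMG/Uzawa/ILU(0) components listed above:

  import numpy as np
  import scipy.sparse as sp
  import scipy.sparse.linalg as spla

  # Toy 2x2 block system [[A, B], [C, D]] standing in for the coupled
  # Stokes-Darcy LGS; block sizes and values are purely illustrative.
  n = 200
  A = sp.eye(n) * 4.0 + sp.random(n, n, density=0.01, random_state=1)
  D = sp.eye(n) * 4.0 + sp.random(n, n, density=0.01, random_state=2)
  B = sp.random(n, n, density=0.005, random_state=3)
  C = sp.random(n, n, density=0.005, random_state=4)
  K = sp.bmat([[A, B], [C, D]]).tocsr()
  b = np.ones(2 * n)

  # Block-Gauss-Seidel preconditioner: invert the block lower triangle
  # [[A, 0], [C, D]].  The diagonal blocks are factored exactly here;
  # the paper replaces such inner solves by AMG, Uzawa, or ILU(0).
  A_lu = spla.splu(A.tocsc())
  D_lu = spla.splu(D.tocsc())

  def apply_bgs(r):
      x1 = A_lu.solve(r[:n])
      x2 = D_lu.solve(r[n:] - C @ x1)
      return np.concatenate([x1, x2])

  M = spla.LinearOperator((2 * n, 2 * n), matvec=apply_bgs)
  x, info = spla.gmres(K, b, M=M)
  print("converged" if info == 0 else f"gmres info = {info}")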

Software used

ID | Name | Description | Version | Programming Language | Dependencies | versioned | published | documented
sw:8713 | preCICE | Library for coupling simulations | v2104.0 | Core library in C++; bindings for Fortran, Python, C, Matlab; adapters dependent on the simulation code (Fortran, Python, C, Matlab) | Linux, Boost, MPI, ... | https://github.com/precice/precice | doi:10.18419/darus-2125 | https://precice.org/docs.html
sw:14293 | DuMux | DUNE for Multi-{Phase, Component, Scale, Physics, ...} flow and transport in porous media | | C++, Python bindings, utility scripts in Python | Linux, DUNE (C++ framework), CMake (module chains), pkg-config, compiler, build-essentials, dpkg | https://git.iws.uni-stuttgart.de/dumux-repositories/dumux | https://zenodo.org/record/5152939#.YQva944zY2w | https://dumux.org/docs/
sw:18749 | ISTL | "Iterative Solver Template Library" (ISTL), part of the "Distributed and Unified Numerics Environment" (DUNE) | | C++ | Linux, DUNE (C++ framework) | https://gitlab.dune-project.org/core/dune-istl | doi:10.1007/978-3-540-75755-9_82 | https://www.dune-project.org/modules/dune-istl/

Hardware

ID | Name | Processor | Compiler | #Nodes | #Cores
| | AMD EPYC 7551P CPU | | 1 | 1

Input Data

ID | Name | Size | Data Structure | Format Representation | Format Exchange | binary/text | proprietary | to publish | to archive
| LGS | O($2 \times 10^6$) (matrix size) | data structure in DUNE/DuMux | numbers | | | open | ? | ?

Output Data

ID | Name | Size | Data Structure | Format Representation | Format Exchange | binary/text | proprietary | to publish | to archive
| Runtime comparison of the direct solver UMFPACK and iterative solvers with partitioned coupling and block preconditioning | small | statistics | numbers | | | | ? | ?
| Runtime comparison of the best-performing solver configurations | small | statistics | numbers | | | | ? | ?
arxiv:2108.13229 | Paper | O(KB) | | pdf | | text | open | yes | yes

Reproducibility

Mathematical Reproducibility

Yes; all parameters are specified.

Runtime Reproducibility

Yes, for the same input samples.

Reproducibility of Results

Due to floating-point arithmetic there is no bitwise reproducibility; see the example below.
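
A minimal Python illustration of why result comparisons must be tolerance-based rather than bitwise:

  # Summation order changes the accumulated rounding error, so runs with
  # different partitionings or instruction orderings need not agree bitwise.
  vals = [0.1] * 10
  print(sum(vals) == 1.0)               # False: rounding error has accumulated
  print(abs(sum(vals) - 1.0) < 1e-12)   # True: compare against a tolerance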

Reproducibility on original Hardware

Reproducibility on other Hardware

a) Serial Computation

b) Parallel Computation

Transferability to

a) similar model parameters (other initial and boundary values)

b) other models

Legend

The following abbreviations are used in the document to indicate/resolve IDs:

doi: DOI / https://dx.doi.org/

sw: swMATH / https://swmath.org/software/

wikidata: https://www.wikidata.org/wiki/