Composite Anderson acceleration method with two window sizes and optimized damping
From MaRDI portal
Publication:6092230
DOI: 10.1002/NME.7096
arXiv: 2203.14627
OpenAlex: W4291036689
MaRDI QID: Q6092230
FDO: Q6092230
Authors: Kewang Chen, C. Vuik
Publication date: 23 November 2023
Published in: International Journal for Numerical Methods in Engineering
Abstract: In this paper, we propose and analyze a set of fully non-stationary Anderson acceleration algorithms with dynamic window sizes and optimized damping. Although Anderson acceleration (AA) has been used for decades to speed up nonlinear solvers in many applications, most authors use and analyze only the stationary version (sAA), with a fixed window size and a constant damping factor; the behavior and potential of non-stationary Anderson acceleration methods remain an open question. Since most efficient linear solvers are built from composable algorithmic components, similar ideas can be applied to AA for solving nonlinear systems. In the present work, to develop non-stationary Anderson acceleration algorithms, we first propose two systematic ways to dynamically alternate the window size by composition. One simple way to combine sAA(m) with sAA(n) is to apply them separately in each iteration and then average their results; this is the additive composite combination. The other, more important way is the multiplicative composite combination, in which sAA(m) is applied in the outer loop and sAA(n) in the inner loop; this yields significant gains. Secondly, to make AA fully non-stationary, we combine these strategies with our recent work on Anderson acceleration with optimized damping (AAoptD), another important route to non-stationary AA for which good performance gains have been observed. Moreover, we investigate the rate of convergence of these non-stationary AA methods under suitable assumptions. Finally, our numerical results show that some of the proposed non-stationary Anderson acceleration algorithms converge faster than the stationary sAA method and, in many cases, can significantly reduce the storage and time required to find the solution.
Full work available at URL: https://arxiv.org/abs/2203.14627
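For orientation, the stationary building block sAA(m) that the paper composes can be sketched as follows. This is a minimal, generic implementation of Anderson acceleration with window size m and constant damping beta (the unconstrained least-squares form), not the authors' composite or optimized-damping algorithms; the function names and interface are illustrative assumptions.

```python
import numpy as np

def anderson(g, x0, m=3, beta=1.0, tol=1e-10, maxit=100):
    """Stationary Anderson acceleration sAA(m) for the fixed-point problem
    x = g(x), with window size m and constant damping factor beta
    (beta = 1 means no damping). Illustrative sketch only."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, G = [x], [g(x)]                        # iterate and g-value histories
    for k in range(maxit):
        f = G[-1] - X[-1]                     # current residual g(x_k) - x_k
        if np.linalg.norm(f) < tol:
            break
        mk = min(m, len(X) - 1)               # effective window size
        if mk == 0:
            x_new = X[-1] + beta * f          # plain (damped) Picard step
        else:
            # Least-squares mixing over the last mk+1 residuals,
            # written with difference columns (unconstrained form).
            F = np.column_stack([Gi - Xi
                                 for Gi, Xi in zip(G[-(mk + 1):], X[-(mk + 1):])])
            dF = F[:, 1:] - F[:, :-1]
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            Xa = np.column_stack(X[-(mk + 1):])
            Ga = np.column_stack(G[-(mk + 1):])
            x_bar = X[-1] - (Xa[:, 1:] - Xa[:, :-1]) @ gamma
            g_bar = G[-1] - (Ga[:, 1:] - Ga[:, :-1]) @ gamma
            x_new = (1 - beta) * x_bar + beta * g_bar   # damped update
        X.append(x_new)
        G.append(g(x_new))
        if len(X) > m + 1:                    # keep only the window
            X.pop(0)
            G.pop(0)
    return X[-1], k

# Example: the fixed point of cos(x), comparing two window sizes
# (the paper's composite methods alternate such window sizes by
# additive or multiplicative composition).
x2, _ = anderson(np.cos, 1.0, m=2)
x5, _ = anderson(np.cos, 1.0, m=5)
```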
Recommendations
- Asymptotic convergence analysis and influence of initial guesses on composite Anderson acceleration
- Anderson acceleration for degenerate and nondegenerate problems
- Anderson acceleration for nonlinear finite volume scheme for advection-diffusion problems
- Convergence analysis for Anderson acceleration
- Anderson acceleration for fixed-point iterations
- Leveraging Anderson acceleration for improved convergence of iterative solutions to transport systems
- Acceleration of Euler and RANS solvers via selective frequency damping
- Two classes of multisecant methods for nonlinear acceleration
- Accelerating the HS-type Richardson iteration method with Anderson mixing
- Fast Convergence of Fast Multipole Acceleration Using Dual Basis Function in the Method of Moments for Composite Structures
Classification (MSC): Numerical linear algebra (65Fxx); Nonlinear algebraic or transcendental equations (65Hxx); Acceleration of convergence in numerical analysis (65Bxx)
Cites Work
- KSSOLV -- a MATLAB toolbox for solving the Kohn-Sham equations
- NITSOL: A Newton Iterative Solver for Nonlinear Systems
- Design and Application of a Gradient-Weighted Moving Finite Element Code I: in One Dimension
- A comparative study on methods for convergence acceleration of iterative vector sequences
- Two classes of multisecant methods for nonlinear acceleration
- Anderson acceleration for fixed-point iterations
- Iterative Procedures for Nonlinear Integral Equations
- GMRESR: a family of nested GMRES methods
- Convergence analysis for Anderson acceleration
- Krylov subspace acceleration for nonlinear multigrid schemes
- Continuation-Conjugate Gradient Methods for the Least Squares Solution of Nonlinear Boundary Value Problems
- Accelerating with rank-one updates
- On the similarities between the quasi-Newton inverse least squares method and GMRES
- Nonlinear Krylov and moving nodes in the method of lines
- Krylov Subspace Acceleration of Nonlinear Multigrid with Application to Recirculating Flows
- Composing scalable nonlinear algebraic solvers
- Elliptic preconditioner for accelerating the self-consistent field iteration in Kohn-Sham density functional theory
- Anderson-accelerated convergence of Picard iterations for incompressible Navier-Stokes equations
- A Proof That Anderson Acceleration Improves the Convergence Rate in Linearly Converging Fixed-Point Methods (But Not in Those Converging Quadratically)
- Solution of the discretized incompressible Navier-Stokes equations with the GMRES method
- Comments on: ``Anderson acceleration, mixing and extrapolation
- Solver composition across the PDE/linear algebra barrier
- On the asymptotic linear convergence speed of Anderson acceleration applied to ADMM
- Anderson acceleration for contractive and noncontractive operators
- Local improvement results for Anderson acceleration with inaccurate function evaluations
- Globally Convergent Type-I Anderson Acceleration for Nonsmooth Fixed-Point Iterations
- Anderson acceleration for a class of nonsmooth fixed-point problems
- On the asymptotic linear convergence speed of Anderson acceleration, Nesterov acceleration, and nonlinear GMRES
Cited In (1)