Least-square acceleration of iterative methods for linear equations
From MaRDI portal
Publication: Q2264402
DOI: 10.1007/BF00933309
zbMATH Open: 0272.65022
MaRDI QID: Q2264402
Authors: Yanyan Li
Publication date: 1974
Published in: Journal of Optimization Theory and Applications
Mathematics Subject Classification:
- 65F20 Numerical solutions to overdetermined systems, pseudoinverses
- 65F10 Iterative numerical methods for linear systems
Cited In (17)
- Extrapolation and prediction of sequences in a vector space
- Computational methods of linear algebra
- Hybrid vector transformations
- Méthodes de projection-minimisation pour les problèmes linéaires
- Some results about vector extrapolation methods and related fixed-point iterations
- Optimal simultaneous maximum a posteriori estimation of states, noise statistics and parameters I. Algorithm
- Optimal simultaneous maximum a posteriori estimation of states, noise statistics and parameters II. Numerical performance
- A survey of Shanks' extrapolation methods and their applications
- A convergence study for reduced rank extrapolation on nonlinear systems
- Application of vector extrapolation methods to consistent singular linear systems
- Quasilinear vector extrapolation methods
- Convergence and stability analyses for some vector extrapolation methods in the presence of defective iteration matrices
- A second-order sparse factorization method for Poisson's equation with mixed boundary conditions
- Nonlinear Schwarz iterations with reduced rank extrapolation
- The genesis and early developments of Aitken's process, Shanks' transformation, the \(\varepsilon\)-algorithm, and related fixed point methods
- Shanks sequence transformations and Anderson acceleration
- Minimal polynomial and reduced rank extrapolation methods are related