Geometrical inverse matrix approximation for least-squares problems and acceleration strategies
From MaRDI portal
Publication: Q827072
DOI: 10.1007/s11075-019-00862-z
zbMATH Open: 1456.65027
arXiv: 1902.08388
OpenAlex: W3000692571
MaRDI QID: Q827072
Publication date: 6 January 2021
Published in: Numerical Algorithms
Abstract: We extend the geometrical inverse approximation approach for solving linear least-squares problems. For that we focus on the minimization of \(\|I - XA\|_F\), where \(A\) is a given rectangular coefficient matrix and \(X\) is the approximate inverse. In particular, we adapt the recently published simplified gradient-type iterative scheme MinCos to the least-squares scenario. In addition, we combine the generated convergent sequence of matrices with well-known acceleration strategies based on recently developed matrix extrapolation methods, and also with some deterministic and heuristic acceleration schemes that are based on adjusting, in a convenient way, the steplength at each iteration. A set of numerical experiments, including large-scale problems, is presented to illustrate the performance of the different acceleration strategies.
Full work available at URL: https://arxiv.org/abs/1902.08388
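The core idea of the abstract can be illustrated with a minimal sketch: a gradient-type iteration that drives \(\|I - XA\|_F\) to zero for a rectangular \(A\), so that \(X\) approximates the pseudoinverse. This is a plain steepest-descent sketch with an exact line search, not the paper's MinCos steplength rule or its extrapolation-based accelerations; the function name and initial scaling are illustrative assumptions.

```python
import numpy as np

def approx_inverse_gd(A, iters=500, tol=1e-8):
    """Gradient iteration for min_X 0.5 * ||I - X A||_F^2.

    A plain-gradient sketch of the geometrical inverse approximation
    idea for least squares (NOT the paper's exact MinCos scheme).
    A is m x n with full column rank; X converges toward pinv(A).
    """
    m, n = A.shape
    # Scaled A^T as initial guess keeps the iterates in the row space
    # of A, so the limit is the Moore-Penrose pseudoinverse.
    X = A.T / np.linalg.norm(A, 2) ** 2
    I = np.eye(n)
    for _ in range(iters):
        R = I - X @ A            # residual; gradient of f is -R A^T
        if np.linalg.norm(R, "fro") < tol:
            break
        G = R @ A.T              # steepest-descent direction
        GA = G @ A
        denom = np.sum(GA * GA)  # ||G A||_F^2
        if denom == 0.0:
            break
        # Exact line search: minimize ||R - alpha * G A||_F over alpha
        alpha = np.sum(R * GA) / denom
        X = X + alpha * G
    return X
```

On a well-conditioned random matrix the iteration converges to the pseudoinverse, i.e. `X @ A` is close to the identity; the acceleration strategies discussed in the paper aim to reduce the number of such gradient steps.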
Recommendations
- Generalized approximate inverse preconditioners for least squares problems
- A geometric Gauss-Newton method for least squares inverse eigenvalue problems
- Sur le problème des moindres carrés
- The least-squares solution of inverse problem over special matrices and its optimal approximation
- Pseudoinverse preconditioners and iterative methods for large dense linear least-squares problems
Classifications
- Ill-posedness and regularization problems in numerical linear algebra (65F22)
- Inverse problems in linear algebra (15A29)
Cites Work
- Damped Anderson Acceleration With Restarts and Monotonicity Control for Accelerating EM and EM-like Algorithms
- Anderson acceleration of the alternating projections method for computing the nearest correlation matrix
- Anderson Acceleration for Fixed-Point Iterations
- Iterative Procedures for Nonlinear Integral Equations
- New adaptive stepsize selections in gradient methods
- On spectral properties of steepest descent methods
- Extrapolation methods theory and practice
- Vector extrapolation methods. Applications and numerical comparison
- Matrix polynomial and epsilon-type extrapolation methods with applications
- Block extrapolation methods with applications
- The Simplified Topological $\varepsilon$-Algorithms for Accelerating Sequences in a Vector Space
- Gradient methods with adaptive step-sizes
- Approximate Inverse Techniques for Block-Partitioned Matrices
- Convergence acceleration during the 20th century
- An Element-Based Spectrally Optimized Approximate Inverse Preconditioner for the Euler Equations
- The epsilon algorithm and related topics
- Matrix differential equations and inverse preconditioners
- On the steplength selection in gradient methods for unconstrained optimization
- Preconditioned conjugate gradient method for finding minimal energy surfaces on Powell-Sabin triangulations
- A derivative-free nonmonotone line-search technique for unconstrained optimization
- Approximate inverse preconditioners for some large dense random electrostatic interaction matrices
- An analysis of sparse approximate inverse preconditioners for boundary integral equations
- Geometrical properties of the Frobenius condition number for positive definite matrices
- On the geometrical structure of symmetric matrices
- Optimal regularized low rank inverse approximation
- Sparse Approximate-Inverse Preconditioners Using Norm-Minimization Techniques
- Geometrical inverse preconditioning for symmetric positive definite matrices
- The simplified topological \(\varepsilon\)-algorithms: software and applications
- Approximate inverse computation using Frobenius inner product
- Résultats négatifs en accélération de la convergence
- Rational approximation to the Fermi-Dirac function with applications in density functional theory
- Sparse approximations of matrix functions via numerical integration of ODEs
- Convergence and acceleration properties for the vector \(\epsilon\)- algorithm
- The genesis and early developments of Aitken's process, Shanks' transformation, the \(\varepsilon\)-algorithm, and related fixed point methods
- Shanks Sequence Transformations and Anderson Acceleration
- Generalized approximate inverse preconditioners for least squares problems
- The Riemannian Barzilai–Borwein method with nonmonotone line search and the matrix geometric mean computation
- Enriched methods for large-scale unconstrained optimization
Cited In (4)
- A geometric theory for preconditioned inverse iteration. I: Extrema of Rayleigh quotient
- A new geometric acceleration of the von Neumann-Halperin projection method
- A geometric Gauss-Newton method for least squares inverse eigenvalue problems
- An accelerated tensorial double proximal gradient method for total variation regularization problem