Estimation of spectral bounds in gradient algorithms
From MaRDI portal
Publication: 385581
DOI: 10.1007/s10440-012-9794-z
zbMath: 1281.65060
OpenAlex: W1989957966
MaRDI QID: Q385581
Elena Bukina, Luc Pronzato, Anatoly A. Zhigljavsky
Publication date: 2 December 2013
Published in: Acta Applicandae Mathematicae
Full work available at URL: https://doi.org/10.1007/s10440-012-9794-z
Keywords: convergence; Fibonacci numbers; conjugate gradient; gradient algorithms; arcsine distribution; estimation of leading eigenvalues; minimization of convex quadratic functions; spectral bounds
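As a hedged illustration of the keywords above (a minimal sketch, not the paper's actual algorithm): when a gradient method is applied to a convex quadratic \(f(x) = \tfrac12 x^\top A x - b^\top x\) with symmetric positive definite \(A\), the Rayleigh quotient of each gradient lies between the extreme eigenvalues of \(A\), so the iteration yields running estimates of spectral bounds. All variable names and the test problem below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: steepest descent on a convex quadratic
# f(x) = 0.5 x'Ax - b'x with SPD matrix A. Each Rayleigh quotient
# g'Ag / g'g of the current gradient g lies in [lambda_min, lambda_max],
# so tracking its running min/max estimates bounds on A's spectrum.

rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
true_eigs = np.linspace(1.0, 100.0, n)      # assumed spectrum in [1, 100]
A = Q @ np.diag(true_eigs) @ Q.T            # SPD test matrix
b = rng.standard_normal(n)

x = np.zeros(n)
lo, hi = np.inf, -np.inf                    # running spectral bound estimates
for _ in range(200):
    g = A @ x - b                           # gradient of the quadratic
    if np.linalg.norm(g) < 1e-10:
        break
    rq = (g @ A @ g) / (g @ g)              # Rayleigh quotient of the gradient
    lo, hi = min(lo, rq), max(hi, rq)
    x = x - g / rq                          # exact line search (Cauchy) step

print(lo, hi)  # lambda_min <= lo <= hi <= lambda_max by construction
```

The estimates here are one-sided (they bracket only part of the spectrum); the cited work on two-point attractors suggests why steepest descent asymptotically alternates between just two Rayleigh-quotient values.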
Related Items
- Delayed Gradient Methods for Symmetric and Positive Definite Linear Systems
- On the steplength selection in gradient methods for unconstrained optimization
Cites Work
- An asymptotically optimal gradient algorithm for quadratic optimization with low computational cost
- Gradient algorithms for quadratic optimization with fast convergence rates
- The block preconditioned conjugate gradient method on vector computers
- A stable Richardson iteration method for complex linear systems
- An optimized iterative method for numerical solution of large systems of equations based on the extremal property of zeroes of Chebyshev polynomials
- Polynomial approximation of functions of matrices and applications
- Asymptotic behaviour of a family of gradient algorithms in \(\mathbb{R}^d\) and Hilbert spaces
- On the asymptotic directions of the s-dimensional optimum gradient method
- A Dynamical-System Analysis of the Optimum s-Gradient Algorithm
- Iterative Krylov Methods for Large Linear Systems
- Practical Use of Polynomial Preconditionings for the Conjugate Gradient Method
- Two-Point Step Size Gradient Methods
- Solution of Sparse Indefinite Systems of Linear Equations
- Methods of conjugate gradients for solving linear systems
- Renormalised steepest descent in Hilbert space converges to a two-point attractor.