An asymptotically optimal gradient algorithm for quadratic optimization with low computational cost
Publication:360475
DOI: 10.1007/s11590-012-0491-7 · zbMATH Open: 1276.90047 · OpenAlex: W1994949122 · MaRDI QID: Q360475 · FDO: Q360475
A. Zhigljavsky, Luc Pronzato, Elena Bukina
Publication date: 27 August 2013
Published in: Optimization Letters
Full work available at URL: https://doi.org/10.1007/s11590-012-0491-7
Recommendations
- Gradient algorithms for quadratic optimization with fast convergence rates
- Studying convergence of gradient algorithms via optimal experimental design theory
- New stepsizes for the gradient method
- Near optimal step size and momentum in gradient descent for quadratic functions
- A Dynamical-System Analysis of the Optimum s-Gradient Algorithm
Keywords: conjugate gradient; quadratic optimization; Fibonacci numbers; arcsine distribution; estimation of leading eigenvalues; gradient algorithms
Cites Work
- Title not available
- Two-Point Step Size Gradient Methods
- The block preconditioned conjugate gradient method on vector computers
- Efficient and reliable iterative methods for linear systems
- Asymptotic behaviour of a family of gradient algorithms in \(\mathbb{R}^d\) and Hilbert spaces
- Studying Convergence of Gradient Algorithms Via Optimal Experimental Design Theory
- A Dynamical-System Analysis of the Optimum s-Gradient Algorithm
- Practical Use of Polynomial Preconditionings for the Conjugate Gradient Method
- Gradient algorithms for quadratic optimization with fast convergence rates
- Title not available
Cited In (6)
- Performance analysis of greedy algorithms for minimising a maximum mean discrepancy
- Delayed Gradient Methods for Symmetric and Positive Definite Linear Systems
- On \(R\)-linear convergence analysis for a class of gradient methods
- Spectral Properties of Barzilai-Borwein Rules in Solving Singly Linearly Constrained Optimization Problems Subject to Lower and Upper Bounds
- On the steplength selection in gradient methods for unconstrained optimization
- Estimation of spectral bounds in gradient algorithms