Gradient algorithms for quadratic optimization with fast convergence rates
From MaRDI portal
Publication: 409265
DOI: 10.1007/S10589-010-9319-5
zbMATH Open: 1262.90122
OpenAlex: W2077067982
MaRDI QID: Q409265
FDO: Q409265
Publication date: 12 April 2012
Published in: Computational Optimization and Applications
Full work available at URL: https://orca.cardiff.ac.uk/15200/1/chebyshev-V3_revised.pdf
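The paper studies gradient algorithms for minimizing convex quadratics. As background, a minimal sketch of classical steepest descent with exact line search on \(f(x) = \tfrac12 x^\top A x - b^\top x\) (with \(A\) symmetric positive definite) is given below; this is the baseline method whose convergence rate such papers aim to improve, not the paper's own algorithm, and all names here are illustrative.

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=10_000):
    """Classical steepest descent on f(x) = 0.5 x^T A x - b^T x.

    Illustration only (not the paper's algorithm): each iteration moves
    along the negative gradient with the exact line-search step
    alpha_k = g^T g / (g^T A g), which minimizes f along that direction.
    """
    x = x0.astype(float)
    for _ in range(max_iter):
        g = A @ x - b                     # gradient of the quadratic
        if np.linalg.norm(g) < tol:      # stop when near-stationary
            break
        alpha = (g @ g) / (g @ (A @ g))  # exact line-search step size
        x = x - alpha * g
    return x

# Small SPD example: the minimizer solves A x = b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = steepest_descent(A, b, np.zeros(2))
```

The per-iteration error contraction of this baseline is governed by the condition number of \(A\); the fast-rate gradient algorithms in the publication above choose step sizes more carefully to beat it.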
Cites Work
- Two-Point Step Size Gradient Methods
- Topics in Advanced Econometrics
- Title not available
- On a successive transformation of probability distribution and its application to the analysis of the optimum gradient method
- Title not available
- A new gradient method with an optimal stepsize property
- On the asymptotic directions of the s-dimensional optimum gradient method
- Asymptotic behaviour of a family of gradient algorithms in \(\mathbb{R}^d\) and Hilbert spaces
- Studying Convergence of Gradient Algorithms Via Optimal Experimental Design Theory
- A Dynamical-System Analysis of the Optimum s-Gradient Algorithm
- Renormalised steepest descent in Hilbert space converges to a two-point attractor.
- Title not available
- Title not available
Cited In (9)
- Delayed Gradient Methods for Symmetric and Positive Definite Linear Systems
- Steepest descent method with random step lengths
- A second-order gradient method for convex minimization
- On the steplength selection in gradient methods for unconstrained optimization
- Title not available
- An asymptotically optimal gradient algorithm for quadratic optimization with low computational cost
- A gradient method exploiting the two dimensional quadratic termination property
- Level set of the asymptotic rate of convergence for the method of steepest descent
- Estimation of spectral bounds in gradient algorithms
Recommendations
- An asymptotically optimal gradient algorithm for quadratic optimization with low computational cost
- Studying convergence of gradient algorithms via optimal experimental design theory
- Asymptotic behaviour of a family of gradient algorithms in \(\mathbb{R}^d\) and Hilbert spaces
- A Dynamical-System Analysis of the Optimum s-Gradient Algorithm
- Iterative minimization of quadratic functionals
This page was built for publication: Gradient algorithms for quadratic optimization with fast convergence rates