Eigenvalues versus singular values study in conjugate gradient algorithms for large-scale unconstrained optimization
DOI: 10.1080/10556788.2016.1225211 · zbMATH Open: 1368.49057 · OpenAlex: W2525623505 · MaRDI QID: Q5268936 · FDO: Q5268936
Publication date: 21 June 2017
Published in: Optimization Methods & Software
Full work available at URL: https://doi.org/10.1080/10556788.2016.1225211
Keywords: convergence; eigenvalues; conjugacy condition; unconstrained optimization; singular values; Wolfe conditions; sufficient descent condition; conjugate gradient algorithms
MSC classification: Large-scale problems in mathematical programming (90C06); Newton-type methods (49M15); Numerical methods for variational inequalities and related problems (65K15); Variational methods for eigenvalues of operators (49R05)
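The record's keywords center on conjugate gradient algorithms and the role of eigenvalues in their convergence. As background, the following is a minimal sketch of the classical linear conjugate gradient method of Hestenes and Stiefel (listed below under "Methods of conjugate gradients for solving linear systems") — illustrative only, not the nonlinear CG variants studied in the paper itself; the matrix, right-hand side, and tolerances are arbitrary example values.

```python
# Minimal sketch of the classical (linear) conjugate gradient method of
# Hestenes & Stiefel for solving A x = b with A symmetric positive definite.
# Illustrative background only -- not the algorithms analysed in the paper.

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x (x = 0 initially)
    p = r[:]                      # first search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))  # step length
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        beta = rs_new / rs_old    # conjugacy-preserving direction update
        p = [r[i] + beta * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Small SPD example system: how fast the residual shrinks is governed by
# the eigenvalue distribution of A, which is what motivates eigenvalue-
# (and singular-value-) based analyses of CG methods.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)      # exact solution is (1/11, 7/11)
```

For an n-by-n SPD system, exact arithmetic terminates in at most n iterations; in practice the convergence rate depends on how clustered the eigenvalues of A are, which is the kind of spectral question the cited works on CG convergence rates address.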
Cites Work
- Algorithm 851
- Title not available
- Title not available
- Function minimization by conjugate gradients
- Linear and nonlinear programming.
- Acceleration of conjugate gradient algorithms for unconstrained optimization
- Numerical study of a relaxed variational problem from optimal design
- On a problem of the theory of lubrication governed by a variational inequality
- Title not available
- Title not available
- Title not available
- A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property
- A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search
- Some descent three-term conjugate gradient methods and their global convergence
- Convergence Conditions for Ascent Methods
- Convergence Conditions for Ascent Methods. II: Some Corrections
- The conjugate gradient method in extremal problems
- Methods of conjugate gradients for solving linear systems
- New conjugacy conditions and related nonlinear conjugate gradient methods
- Title not available
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- On restart procedures for the conjugate gradient method
- The rate of convergence of conjugate gradients
- The Dai-Liao nonlinear conjugate gradient method with optimal parameter choices
- Convergence Properties of Nonlinear Conjugate Gradient Methods
- On the rate of convergence of the preconditioned conjugate gradient method
- On the real convergence rate of the conjugate gradient method
- Conjugate Gradient Methods with Inexact Searches
- A class of iterative methods for finite element equations
- Some Superlinear Convergence Results for the Conjugate Gradient Method
- New convergence results and preconditioning strategies for the conjugate gradient method
- Block splittings for the conjugate gradient method
- Über einige Methoden der Relaxationsrechnung
Cited In (9)
- A new self-scaling memoryless quasi-Newton update for unconstrained optimization
- A double parameter scaled BFGS method for unconstrained optimization
- A double parameter self-scaling memoryless BFGS method for unconstrained optimization
- Title not available
- Title not available
- Accelerated adaptive Perry conjugate gradient algorithms based on the self-scaling memoryless BFGS update
- A double-parameter scaling Broyden-Fletcher-Goldfarb-Shanno method based on minimizing the measure function of Byrd and Nocedal for unconstrained optimization
- A diagonal quasi-Newton updating method based on minimizing the measure function of Byrd and Nocedal for unconstrained optimization
- New conjugate gradient algorithms based on self-scaling memoryless Broyden-Fletcher-Goldfarb-Shanno method