On the truncated conjugate gradient method (Q1575075)

scientific article

    Statements

    On the truncated conjugate gradient method (English)
    9 November 2000
    Trust region algorithms for the unconstrained optimization problem \(\min_{x\in\mathbb{R}^n} f(x)\), where the function \(f(x)\) is continuously differentiable, often need to solve the following subproblem \[ \min_{d\in\mathbb{R}^n} g^Td+ \tfrac{1}{2}d^TBd\quad\text{subject to }\|d\|\leq \Delta, \] where \(\Delta> 0\) is the trust region radius, \(g\in \mathbb{R}^n\) is the gradient of the objective function \(f(x)\) at the current iterate, and \(B\in\mathbb{R}^{n\times n}\) is a symmetric approximation to the Hessian of \(f(x)\). At each iteration of a trust region algorithm, this subproblem has to be solved, exactly or inexactly, to obtain a trial step. The author shows that if \(B\) is positive definite, the function reduction obtained by the truncated conjugate gradient method is at least half of the reduction obtained by the exact solution. Numerical tests are not given.
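    The truncated conjugate gradient method discussed in the review can be sketched as follows. This is an illustrative implementation in the spirit of the Steihaug-Toint variant, not the author's code; the function names and tolerances are my own choices. Conjugate gradient iterations on the quadratic model are stopped early when the iterate would leave the trust region or when a direction of nonpositive curvature is found, in which case the step is extended to the boundary \(\|d\| = \Delta\).

```python
import numpy as np

def truncated_cg(g, B, delta, tol=1e-10, max_iter=None):
    """Truncated CG sketch for: min g^T d + d^T B d / 2  s.t.  ||d|| <= delta.

    If B is positive definite and the boundary is never reached,
    this returns (up to tolerance) the exact minimizer -B^{-1} g.
    """
    n = g.size
    if max_iter is None:
        max_iter = 2 * n
    d = np.zeros(n)
    r = g.copy()   # residual of the model gradient: r = g + B d
    p = -r         # current conjugate search direction
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            return d   # interior solution found
        Bp = B @ p
        curv = p @ Bp
        if curv <= 0:
            # nonpositive curvature: follow p to the trust region boundary
            return d + _tau_to_boundary(d, p, delta) * p
        alpha = (r @ r) / curv
        if np.linalg.norm(d + alpha * p) >= delta:
            # full CG step leaves the region: truncate at the boundary
            return d + _tau_to_boundary(d, p, delta) * p
        d = d + alpha * p
        r_new = r + alpha * Bp
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p
        r = r_new
    return d

def _tau_to_boundary(d, p, delta):
    # positive root tau of ||d + tau p||^2 = delta^2
    a = p @ p
    b = 2.0 * (d @ p)
    c = d @ d - delta ** 2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
```

    With a large radius and positive definite \(B\) the sketch reduces to plain conjugate gradients and recovers the exact Newton-like step; with a small radius it returns a point on the boundary, which is the truncated step whose model reduction the paper compares against the exact one.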
    numerical examples
    unconstrained optimization
    truncated conjugate gradient method
    trust region algorithm
