Global convergence of algorithms with nonmonotone line search strategy in unconstrained optimization (Q1411546)
From MaRDI portal
scientific article
Language | Label | Description | Also known as |
---|---|---|---|
English | Global convergence of algorithms with nonmonotone line search strategy in unconstrained optimization | scientific article | |
Statements
Global convergence of algorithms with nonmonotone line search strategy in unconstrained optimization (English)
0 references
29 October 2003
0 references
Let \(f: \mathbb{R}^{n}\rightarrow \mathbb{R}\) be a continuously differentiable function. For the unconstrained optimization problem \[ \min_{x \in \mathbb{R}^{n}} f(x), \] the author studies methods that generate an approximation sequence \((x_{k})\) by the iteration \[ x_{k+1} = x_{k} + \alpha_{k} d_{k}, \quad k = 0, 1, \dots, \] with a given starting point \(x_{0}\in \mathbb{R}^{n}\), where \(d_{k}\) is a search direction and \(\alpha_{k} > 0\) is a step size determined by a line search. If the line search forces the sequence \((f(x_{k}))\) to be monotone, the strategy is called monotone. The focus of this paper is on nonmonotone line search strategies, whose hypotheses do not force the sequence \((f(x_{k}))\) to be monotone. The author proves that the nonmonotone Wolfe line search strategy is \(M\)-efficient and that the Armijo line search strategy is semi-\(M\)-efficient. Using these results, he shows the global convergence of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with a nonmonotone line search strategy.
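The review states the results only at an abstract level; the paper's \(M\)-efficiency and semi-\(M\)-efficiency conditions are not reproduced here. As an illustration of the general idea, the following is a minimal Python sketch of a BFGS iteration combined with a nonmonotone Armijo-type backtracking test of the familiar Grippo-Lampariello-Lucidi form, which accepts \(\alpha\) once \(f(x_{k}+\alpha d_{k}) \le \max_{0\le j<M} f(x_{k-j}) + \sigma\alpha\nabla f(x_{k})^{T}d_{k}\). All function names and parameter values are assumptions made for the sketch and do not reproduce the author's exact strategy.

```python
import numpy as np
from collections import deque

def nonmonotone_armijo(f, x, d, g, f_hist, sigma=1e-4, beta=0.5,
                       alpha0=1.0, max_backtracks=50):
    """Backtracking Armijo step with a nonmonotone acceptance test:
    compare against max of the last few objective values instead of f(x),
    so the sequence (f(x_k)) need not decrease monotonically."""
    f_ref = max(f_hist)            # nonmonotone reference value
    gTd = g @ d                    # directional derivative; d is assumed a descent direction
    alpha = alpha0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f_ref + sigma * alpha * gTd:
            return alpha
        alpha *= beta              # backtrack
    return alpha                   # fallback if no acceptable step is found

def bfgs_nonmonotone(f, grad, x0, memory=10, tol=1e-6, max_iter=200):
    """Iteration x_{k+1} = x_k + alpha_k d_k with d_k = -H_k grad f(x_k),
    where alpha_k comes from the nonmonotone Armijo search above."""
    n = x0.size
    x, H = x0.astype(float), np.eye(n)      # H approximates the inverse Hessian
    f_hist = deque([f(x)], maxlen=memory)   # last `memory` objective values
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                           # quasi-Newton search direction
        alpha = nonmonotone_armijo(f, x, d, g, f_hist)
        s = alpha * d
        x_new = x + s
        y = grad(x_new) - g
        sy = s @ y
        if sy > 1e-12:                       # curvature condition: standard BFGS inverse update
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x = x_new
        f_hist.append(f(x))
    return x

if __name__ == "__main__":
    # Small usage example on the Rosenbrock function.
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                               200 * (x[1] - x[0]**2)])
    print(bfgs_nonmonotone(f, grad, np.array([-1.2, 1.0])))
```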
0 references
nonmonotone line search strategy
0 references
BFGS method
0 references
global convergence
0 references
unconstrained optimization
0 references
Wolfe line search strategy
0 references
Armijo line search strategy
0 references
Broyden-Fletcher-Goldfarb-Shanno method
0 references