Convergence of descent method without line search (Q2570691)
From MaRDI portal
Language | Label | Description | Also known as |
---|---|---|---|
English | Convergence of descent method without line search | scientific article | |
Statements
Convergence of descent method without line search (English)
28 October 2005
Descent methods of inexact line search type, \(x_{k+1}:= x_k+ \alpha_k d_k\), are considered, where \(d_k\) is a descent direction and \(\alpha_k\) the step size. No one-dimensional subproblem needs to be solved; instead, explicit formulas for \(\alpha_k\) are given. Two methods are discussed. The first is applicable if the gradient \(g\) of the objective \(f\) is Lipschitz continuous and its Lipschitz constant \(L\) is easy to estimate; the step size then has the form \(\alpha_k= -g(x_k)^T d_k/(L_k\| d_k\|^2)\), where the \(L_k\) are approximations of \(L\) that enter the convergence proofs. The second method is applicable if the second derivative of \(f\) is bounded by some constant \(M\) that is easy to estimate; in this case, \(\alpha_k= -g(x_k)^T d_k/(M_k\| d_k\|^2)\), where the \(M_k\) are approximations of \(M\). Global convergence as well as a linear rate of convergence is shown for both methods.
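As an illustration of the first method, here is a minimal Python sketch, assuming the steepest-descent direction \(d_k = -g(x_k)\), a fixed estimate \(L_k \equiv L\), and a simple quadratic test problem; all three are illustrative assumptions, not the paper's construction, which allows general descent directions and adaptive estimates \(L_k\).

```python
import numpy as np

def descent_without_line_search(grad, x0, L, tol=1e-8, max_iter=1000):
    """Descent method with the explicit step size
    alpha_k = -g(x_k)^T d_k / (L_k ||d_k||^2),
    here with d_k = -g(x_k) and a fixed estimate L_k = L (assumptions)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:     # stationarity test
            break
        d = -g                          # steepest-descent direction (assumption)
        # Explicit step size: no one-dimensional subproblem is solved.
        alpha = -g.dot(d) / (L * d.dot(d))
        x = x + alpha * d
    return x

# Toy example: f(x) = 0.5 x^T A x, grad f(x) = A x, Lipschitz constant L = ||A||_2.
A = np.diag([1.0, 10.0])
x_min = descent_without_line_search(lambda x: A.dot(x), x0=[5.0, 5.0], L=10.0)
```

With \(d_k = -g(x_k)\) the formula reduces to the classical step \(\alpha_k = 1/L\), which makes the role of the Lipschitz estimate transparent.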
unconstrained optimization
descent method
line search
global convergence
linear convergence