A class of superlinearly convergent projection algorithms with relaxed stepsizes (Q1057626)
From MaRDI portal
Language | Label | Description | Also known as
---|---|---|---
English | A class of superlinearly convergent projection algorithms with relaxed stepsizes | scientific article |
Statements
A class of superlinearly convergent projection algorithms with relaxed stepsizes (English)
1984
Let \(f: \mathbb{R}^n \to \mathbb{R}\) and \(g: \mathbb{R}^n \to \mathbb{R}^m\) be differentiable functions and let \(D = \{x \in \mathbb{R}^n \mid g(x) \geq 0\}\) be a convex set. Consider the problem \(\min\{f(x) \mid x \in D\}\). For this typical constrained optimization problem the author investigates a quasi-Newton extension of the Goldstein-Levitin-Polyak projected gradient algorithm. The extension projects an unconstrained descent step onto the feasible region \(D\) and is close to the work of \textit{J. C. Allwright} [J. Optimization Theory Appl. 30, 1-18 (1980; Zbl 0393.90069)]. In the algorithm presented here, the determination of the stepsize is divided into two stages. The first stage determines the length of the unconstrained step; the second determines a stepsize from the interval \([0,1]\) that shortens the projected step whenever the projection onto \(D\) does not reduce the objective function \(f\). In the first stage the decrease of \(f\) is tested against a conventional linear functional as a bound, while the second stage uses a quadratic functional as the bound. We remark that the stepsize rule of the well-known Goldstein-Levitin-Polyak algorithm is relaxed here, so that the quadratic subproblem involved becomes simple.
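To make the two-stage stepsize concrete, here is a minimal Python sketch of one iteration of such a projected quasi-Newton step. The review does not give the author's exact acceptance rules, so the particular tests below (an Armijo-type linear bound in stage one, a quadratic bound in stage two), the box projection standing in for the projection onto the general set \(D\), and all names and parameters (`two_stage_projected_step`, `sigma`, `beta`) are illustrative assumptions, not the paper's method.

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto a box {x : lo <= x <= hi}, standing in for
    the projection onto the general convex set D = {x : g(x) >= 0}."""
    return np.clip(x, lo, hi)

def two_stage_projected_step(f, grad, H, x, lo, hi, sigma=1e-4, beta=0.5):
    """One iteration of a projected quasi-Newton step with a
    two-stage stepsize (illustrative sketch only).

    Stage 1: choose the unconstrained steplength t so that the
             decrease of f is bounded by a linear functional
             (Armijo-type test) along the quasi-Newton direction.
    Stage 2: project the trial point onto D and, if needed, shorten
             the projected step by a factor lam in [0, 1], testing
             the decrease against a quadratic bound.
    """
    g = grad(x)
    d = -np.linalg.solve(H, g)              # quasi-Newton direction

    # Stage 1: unconstrained steplength, linear decrease bound.
    t = 1.0
    while f(x + t * d) > f(x) + sigma * t * (g @ d):
        t *= beta

    y = project_box(x + t * d, lo, hi)      # trial point projected onto D
    p = y - x                               # projected step

    # Stage 2: lam = 1 is accepted when the projected point already
    # reduces f enough; otherwise the step is shortened, with the
    # decrease tested against a quadratic functional.
    lam = 1.0
    while f(x + lam * p) > f(x) + sigma * (lam * (g @ p)
                                           + 0.5 * lam**2 * (p @ H @ p)):
        lam *= beta
        if lam < 1e-10:                     # safeguard against stalling
            break
    return x + lam * p
```

With `H` taken as the identity this reduces to a relaxed projected gradient iteration in the Goldstein-Levitin-Polyak spirit; the superlinear convergence discussed in the paper rests on the choice of the quasi-Newton matrices, which this sketch takes as given.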
relaxed stepsizes
superlinear convergence
constrained optimization
quasi-Newton
Goldstein-Levitin-Polyak projected gradient algorithm
quadratic subproblem