Newton's method for singular constrained optimization problems (Q1057627)

From MaRDI portal
scientific article

    Statements

    Newton's method for singular constrained optimization problems (English)
    1984
    Let \(X\) be a Banach space and \(F: U\to {\mathbb{R}}\) a functional defined on a convex subset \(U\subset X\). The constrained optimization problem is to find \(\xi\in U\) such that \(F(\xi)\geq F(u)\) for all \(u\in U\). If \(F\) is twice Fréchet-differentiable on \(U\), the objective \(F\) can be replaced at \(x\in U\) by its quadratic Taylor expansion \(Q_x(u)=F'_x(u-x)+\tfrac{1}{2}F''_x(u-x)(u-x)\), so the constrained problem for the objective \(F\) can be approximated at \(x\in U\) by solving the same problem for \(Q_x\). With this method one can estimate the rate of convergence. For example, if \(X\) is a Hilbert space, \(U\) is bounded and closed, and there are \(m,M>0\) such that (*) \(m\|y\|^2\leq F''_x(y)(y)\leq M\|y\|^2\), then Newton's method converges locally quadratically. The author investigates the convergence and the convergence rate of Newton-type methods for constrained optimization problems; the results extend known theorems for the finite-dimensional case. The author considers a class of minimization problems that are singular at the optimal point, which is tantamount to condition (*) being violated, and shows that the superlinear convergence rate is retained for a certain part of the iterations. Finally, the results are applied to problems with perturbed data.
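    The quadratic-model iteration described in the review can be illustrated concretely. The following Python snippet is a minimal sketch, not the paper's construction: it assumes, purely for illustration, that \(X={\mathbb{R}}^n\), that \(U\) is a box \(\{u: lo\leq u\leq hi\}\), and that condition (*) holds; the helper name newton_constrained and the test objective are hypothetical, and scipy's L-BFGS-B routine is used only to solve the convex quadratic subproblem over \(U\).

    import numpy as np
    from scipy.optimize import minimize

    def newton_constrained(grad, hess, x0, lo, hi, tol=1e-10, max_iter=50):
        """Minimize F over the box U = {u : lo <= u <= hi} by repeatedly minimizing
        the quadratic Taylor model Q_x(u) = F'_x(u-x) + 0.5*F''_x(u-x)(u-x)
        of F around the current iterate x (the constant term F(x) is dropped)."""
        x = np.clip(np.asarray(x0, dtype=float), lo, hi)
        for _ in range(max_iter):
            g, H = grad(x), hess(x)
            # Quadratic model of F around x and its gradient.
            Q = lambda u, x=x, g=g, H=H: g @ (u - x) + 0.5 * (u - x) @ H @ (u - x)
            dQ = lambda u, x=x, g=g, H=H: g + H @ (u - x)
            # Constrained subproblem: minimize Q_x over U (convex under (*)).
            res = minimize(Q, x, jac=dQ, bounds=list(zip(lo, hi)), method="L-BFGS-B")
            if np.linalg.norm(res.x - x) < tol:
                return res.x
            x = res.x
        return x

    # Hypothetical test problem: F(u) = 0.5*|u|^2 + 0.25*sum(u**4) - sum(u), whose
    # Hessian is uniformly positive definite on the box, so (*) is satisfied.
    grad = lambda u: u + u**3 - 1.0
    hess = lambda u: np.diag(1.0 + 3.0 * u**2)
    lo, hi = np.zeros(3), 0.4 * np.ones(3)
    print(newton_constrained(grad, hess, 0.2 * np.ones(3), lo, hi))  # approx. [0.4, 0.4, 0.4]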
    singular problems
    Banach space
    constrained optimization
    rate of convergence
    superlinear convergence