A successive quadratic programming method for a class of constrained nonsmooth optimization problems (Q1174457)


    Statements

    A successive quadratic programming method for a class of constrained nonsmooth optimization problems (English)
    25 June 1992
This paper is a model of clarity and completeness. It describes an iterative procedure for approximating Kuhn-Tucker points of the problem \[ \hbox{minimize\;}f(x)+F(x)\hbox{ subject to\;}c_i(x)=0,\quad i=1,2,\dots,m.\leqno(P) \] Here \(F\) and \(c_i\), \(i=1,2,\dots,m\), are twice continuously differentiable functions carrying \(\mathbb{R}^n\) into \(\mathbb{R}\), while \(f:\mathbb{R}^n\to\mathbb{R}\) is a (not necessarily differentiable) convex function.

Given positive tolerances \(\varepsilon_0\), \(\varepsilon_1\), and \(\varepsilon_2\), the algorithm takes finitely many steps to identify a point \(x\in\mathbb{R}^n\) and a multiplier vector \(u\in\mathbb{R}^m\) satisfying the following approximate Kuhn-Tucker conditions: \[ 0\in\partial_{\varepsilon_0}f(x)+\nabla F(x)+Dc(x)^T u+\varepsilon_1 S^n,\qquad 0\in c(x)+\varepsilon_2 S^m. \] Here \(S^k\) denotes the closed unit ball \(\{w\in\mathbb{R}^k: |w|\le 1\}\).

From a starting point \(x_0\in\mathbb{R}^n\), the algorithm first formulates a linearly constrained convex programming problem whose solution gives a promising search direction based at \(x_0\). This auxiliary problem is solved approximately by an iterative cutting-plane type approach, and the penalty parameter used in the subsequent line search is updated during this phase. Once the search direction and the penalty parameter have been determined, a line search with geometrically decreasing trial step sizes generates an improved approximation \(x_1\), and then the whole procedure begins again.

The paper explains the details of each step and provides carefully stated theorems with detailed proofs guaranteeing termination after finitely many steps. The rate of convergence is not discussed explicitly, and there are no computational examples. However, the author claims that the algorithm is ``implementable'' in the sense that, instead of requiring complete knowledge of the subdifferential of the nonsmooth convex function \(f\), the procedure requires only one subgradient vector at each step.
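To make the stopping test concrete, here is a minimal sketch (not from the paper) of how the approximate Kuhn-Tucker conditions above can be checked numerically. All names (`subgrad_f`, `grad_F`, `jac_c`, and so on) are hypothetical; `subgrad_f(x)` is assumed to return a single element of \(\partial_{\varepsilon_0}f(x)\), in keeping with the review's remark that the algorithm needs only one subgradient per step.

```python
import numpy as np

def approx_kkt(subgrad_f, grad_F, jac_c, c, x, u, eps1, eps2):
    """Test the approximate Kuhn-Tucker conditions

        0 in  d_{eps0} f(x) + grad F(x) + Dc(x)^T u + eps1 * S^n,
        0 in  c(x) + eps2 * S^m,

    with S^k the closed unit ball.  Because only the one available
    element of the eps0-subdifferential is inspected, success is a
    sufficient (not a necessary) certificate.
    """
    residual = subgrad_f(x) + grad_F(x) + jac_c(x).T @ u
    return (np.linalg.norm(residual) <= eps1 and
            np.linalg.norm(np.atleast_1d(c(x))) <= eps2)
```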
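The review gives only the outline of the main loop, so the sketch below fills in the simplest concrete choices and should not be read as the paper's algorithm: the direction-finding subproblem is reduced to a single-linearization, equality-constrained quadratic program with identity Hessian (the paper instead refines a cutting-plane model of \(f\)), and the penalty update and sufficient-decrease test are standard textbook rules. All names are hypothetical.

```python
import numpy as np

def direction_subproblem(g, A, c_val):
    """Solve  min_d  g.T d + 0.5*||d||^2   s.t.  A d + c_val = 0
    via its KKT system; returns the direction d and multipliers u.
    A crude stand-in for the paper's cutting-plane-refined subproblem."""
    n, m = g.size, c_val.size
    K = np.block([[np.eye(n), A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c_val]))
    return sol[:n], sol[n:]

def sqp_nonsmooth(x0, f, subgrad_f, F, grad_F, c, jac_c,
                  eps1=1e-6, eps2=1e-6, mu=1.0, beta=0.5, max_iter=200):
    """Skeleton of the successive quadratic programming loop described
    in the review (illustrative stand-ins, not the paper's rules)."""
    x = np.asarray(x0, dtype=float)
    u = np.zeros(np.atleast_1d(c(x)).size)
    for _ in range(max_iter):
        g = subgrad_f(x) + grad_F(x)      # one subgradient of f, plus grad F
        A, c_val = jac_c(x), np.atleast_1d(c(x))
        d, u = direction_subproblem(g, A, c_val)
        # Approximate Kuhn-Tucker test (cf. the check above).
        if (np.linalg.norm(g + A.T @ u) <= eps1
                and np.linalg.norm(c_val) <= eps2):
            return x, u
        # Exact-penalty parameter update (standard rule, not the paper's).
        mu = max(mu, np.linalg.norm(u, np.inf) + 1.0)
        # Geometric line search on phi(y) = f(y) + F(y) + mu*||c(y)||_1:
        # trial steps 1, beta, beta**2, ... until sufficient decrease.
        phi = lambda y: (f(y) + F(y)
                         + mu * np.linalg.norm(np.atleast_1d(c(y)), 1))
        t, phi0 = 1.0, phi(x)
        while phi(x + t * d) > phi0 - 1e-4 * t * np.dot(d, d) and t > 1e-12:
            t *= beta
        x = x + t * d
    return x, u
```

With these stand-ins, a small hypothetical instance such as minimizing \(|x_1|+(x_2-1)^2\) subject to \(x_1+x_2-2=0\) (take \(f(x)=|x_1|\), \(F(x)=(x_2-1)^2\), \(c(x)=x_1+x_2-2\)) exercises every ingredient of the problem class (P): a nonsmooth convex \(f\), a smooth \(F\), and one equality constraint. The identity-Hessian model makes the subproblem purely first-order; the paper's cutting-plane refinement exists precisely because a single linearization of \(f\) is a poor model near kinks.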
    constrained nonsmooth optimization
    exact penalization
    approximating Kuhn-Tucker points
    iterative cutting-plane type approach
    line search