Generalized Hessian matrix and second-order optimality conditions for problems with \(C^{1,1}\) data (Q795323)
1984
This paper has two objectives: (1) to use Clarke's generalized derivative concept to extend the notion of the Hessian to \(C^{1,1}\) functions, that is, to functions with locally Lipschitz gradients (for example, if \(f\) is \(C^2\), then \(\{\max(f,0)\}^2\) is \(C^{1,1}\)); and (2) to use the properties of the generalized Hessian, including a Taylor expansion, to derive second-order optimality conditions for mathematical programming problems with nonlinear constraints and \(C^{1,1}\) data.

The following two theorems are of interest:

1. Let \({\mathcal O}\) be a nonempty open subset of \(\mathbb{R}^n\), let \(f\in C^{1,1}({\mathcal O})\), and let \([a,b]\subset {\mathcal O}\). Then there exist \(c\in(a,b)\) and \(M_c\in\partial^2 f(c)\) such that \(f(b)=f(a)+\langle\nabla f(a),b-a\rangle+\tfrac12\langle M_c(b-a),b-a\rangle\). Here \(\partial^2 f(c)\) denotes the convex hull of the set of all limits of the form \(\lim \nabla^2 f(x_i)\), taken over all sequences \(\{x_i\}\) converging to \(c\) along which \(f\) is twice differentiable, so that \(\nabla^2 f(x_i)\) is meaningful.

2. Let \(x_0\) be a local minimum of the constrained problem (C): minimize \(f(x)\) subject to \(g_i(x)\leq 0\), \(i=1,\dots,m\), and \(h_j(x)=0\), \(j=1,\dots,n\), where \(f\), \(g_i\), and \(h_j\) are all \(C^{1,1}\). Let \(G(\lambda)=\{x \mid g_i(x)=0 \text{ if } \lambda_i>0,\ g_i(x)\leq 0 \text{ if } \lambda_i=0,\ h_j(x)=0 \text{ for all } j\}\), and let \(T_\lambda\) be the tangent cone to \(G(\lambda)\) at \(x_0\). Let \(L\) denote the Lagrangian \(L(x,\lambda,\mu)=f(x)+\sum \lambda_i g_i(x)+\sum \mu_j h_j(x)\), and let \(\partial^2_{xx}L(x_0,\lambda,\mu)\) denote the generalized Hessian of \(L\) with respect to \(x\) at \(x_0\). Then, for each Kuhn-Tucker multiplier \((\lambda,\mu)\) and each \(d\in T_\lambda\), there is a matrix \(A\in\partial^2_{xx}L(x_0,\lambda,\mu)\) such that \(\langle Ad,d\rangle\geq 0\).
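A one-dimensional worked example (a sketch by the reviewer, not taken from the paper) shows both the generalized Hessian of the definition above and the mean-value expansion of the first theorem on the simplest \(C^{1,1}\) function that is not \(C^2\):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $g(x) = \{\max(x,0)\}^2$ on $\mathbb{R}$. Then
\[
  g'(x) = 2\max(x,0),
\]
which is Lipschitz with constant $2$, so $g \in C^{1,1}(\mathbb{R})$; yet
$g''(0)$ does not exist, since $g''(x) = 0$ for $x < 0$ and $g''(x) = 2$ for
$x > 0$. The limits of $g''(x_i)$ along sequences $x_i \to 0$ avoiding the
origin are exactly $0$ and $2$, hence
\[
  \partial^2 g(0) = \operatorname{co}\{0, 2\} = [0, 2].
\]
The mean-value expansion holds, e.g., with $a = -1$, $b = 1$: here
$g(1) = 1$, $g(-1) = 0$, $g'(-1) = 0$, and the expansion reads
\[
  1 = 0 + 0 + \tfrac12\, M_c\,(b-a)^2 = 2 M_c ,
\]
forcing $M_c = \tfrac12$. The choice $c = 0$ works, since
$\tfrac12 \in \partial^2 g(0) = [0,2]$, whereas no $c \neq 0$ does.
\end{document}
```

Note that the point \(c\) is forced to the single point where \(g\) fails to be twice differentiable, which is why the convex hull in the definition of \(\partial^2 g\) is essential.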
Clarke's generalized derivative
functions with locally Lipschitz gradients
generalized Hessian
second order optimality conditions
nonlinear constraints