A globally and superlinearly convergent trust region method for \(LC^1\) optimization problems
From MaRDI portal
Publication: 5936233
DOI: 10.1007/s11766-001-0039-6
zbMath: 0980.90091
MaRDI QID: Q5936233
Publication date: 10 July 2001
Published in: Applied Mathematics. Series B (English Edition)
Full work available at URL: https://doi.org/10.1007/s11766-001-0039-6
90C30: Nonlinear programming
90C33: Complementarity and equilibrium problems and variational inequalities (finite dimensions) (aspects of mathematical programming)
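The work indexed here concerns a trust region method for \(LC^1\) objectives (once differentiable with locally Lipschitz gradient). As context, the generic trust region loop — minimize a local quadratic model inside a radius, accept or reject the step by the ratio of actual to predicted reduction, then adjust the radius — can be sketched as below. This is a minimal smooth-case illustration using the Cauchy point for the subproblem, not the paper's algorithm: the paper replaces the Hessian with a generalized (Clarke) Hessian element, which this sketch does not attempt. All names (`trust_region_minimize`, the parameter defaults) are illustrative.

```python
import numpy as np

def trust_region_minimize(f, grad, hess, x0, delta0=1.0, delta_max=10.0,
                          eta=0.1, tol=1e-8, max_iter=200):
    """Basic trust region loop with a Cauchy-point subproblem solve.

    Illustrative sketch only. For LC^1 problems, `hess` would return an
    element of the generalized Hessian rather than the classical one.
    """
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        B = hess(x)
        # Cauchy point: minimize the quadratic model m(p) = g'p + p'Bp/2
        # along -g, subject to ||p|| <= delta.
        gBg = g @ B @ g
        if gBg <= 0:
            tau = 1.0                      # model unbounded along -g: go to boundary
        else:
            tau = min(1.0, gnorm**3 / (delta * gBg))
        p = -tau * (delta / gnorm) * g
        # Ratio of actual to predicted reduction decides acceptance.
        pred = -(g @ p + 0.5 * p @ B @ p)
        if pred <= 0:
            break
        rho = (f(x) - f(x + p)) / pred
        # Shrink the radius on poor agreement, expand on very good agreement
        # when the step hit the boundary.
        if rho < 0.25:
            delta *= 0.25
        elif rho > 0.75 and abs(np.linalg.norm(p) - delta) < 1e-12:
            delta = min(2.0 * delta, delta_max)
        if rho > eta:
            x = x + p
    return x
```

On a smooth strictly convex quadratic such as \(f(x) = x_1^2 + 2x_2^2\), this loop drives the gradient to zero; the paper's contribution is establishing global and superlinear convergence of such a scheme when only \(LC^1\) smoothness is available.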
Cites Work
- Generalized Hessian matrix and second-order optimality conditions for problems with \(C^{1,1}\) data
- Superlinearly convergent approximate Newton methods for \(LC^1\) optimization problems
- A trust region algorithm for minimization of locally Lipschitzian functions
- Computational schemes for large-scale problems in extended linear-quadratic programming
- A globally convergent Newton method for convex \(SC^1\) minimization problems
- A nonsmooth version of Newton's method
- Globally and superlinearly convergent trust-region algorithm for convex \(SC^1\)-minimization problems and its application to stochastic programs
- Generalized Linear-Quadratic Problems of Deterministic and Stochastic Optimal Control in Discrete Time
- On second-order sufficient optimality conditions for \(C^{1,1}\)-optimization problems
- Semismooth and Semiconvex Functions in Constrained Optimization
- Generalized second-order directional derivatives and optimization with \(C^{1,1}\) functions