On the complexity of finding first-order critical points in constrained nonlinear optimization
DOI: 10.1007/s10107-012-0617-9 · zbMATH Open: 1301.68154 · DBLP: journals/mp/CartisGT14 · OpenAlex: W2097311548 · Wikidata: Q58185701 · Scholia: Q58185701 · MaRDI QID: Q2452373 · FDO: Q2452373
Authors: Coralia Cartis, Nicholas I. M. Gould, Philippe L. Toint
Publication date: 2 June 2014
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: http://purl.org/net/epubs/manifestation/6466/RAL-TR-2011-008.pdf
Recommendations
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
- Corrigendum to: "On the complexity of finding first-order critical points in constrained nonlinear optimization"
- Evaluation complexity bounds for smooth constrained nonlinear optimization using scaled KKT conditions and high-order models
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
MSC Classification
- Analysis of algorithms and problem complexity (68Q25)
- Nonlinear programming (90C30)
- Abstract computational complexity for mathematical programming problems (90C60)
Cites Work
- Title not available
- Introductory lectures on convex optimization. A basic course.
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- Worst case complexity of direct search
- On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- On the Convergence of Successive Linear-Quadratic Programming Algorithms
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Cubic regularization of Newton method and its global performance
- An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity
- Black-Box Complexity of Local Minimization
- On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization
- A note about the complexity of minimizing Nesterov's smooth Chebyshev-Rosenbrock function
Cited In (31)
- Strict Constraint Qualifications and Sequential Optimality Conditions for Constrained Optimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Convergence and worst-case complexity of adaptive Riemannian trust-region methods for optimization on manifolds
- A derivative-free trust-region algorithm for composite nonsmooth optimization
- Globally convergent homotopy algorithm for solving the KKT systems to the principal-agent bilevel programming
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- First-order methods for problems with \(O(1)\) functional constraints can have almost the same convergence rate as for unconstrained problems
- Direct search based on probabilistic feasible descent for bound and linearly constrained problems
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization
- Majorization-minimization procedures and convergence of SQP methods for semi-algebraic and tame programs
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- Riemannian stochastic variance-reduced cubic regularized Newton method for submanifold optimization
- Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
- Title not available
- Stochastic optimization over proximally smooth sets
- On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
- Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
- Level constrained first order methods for function constrained optimization
- The multiproximal linearization method for convex composite problems
- Corrigendum to: "On the complexity of finding first-order critical points in constrained nonlinear optimization"
- Worst-case evaluation complexity of a quadratic penalty method for nonconvex optimization
- On the complexity of an inexact restoration method for constrained optimization
- On the complexity of solving feasibility problems with regularized models
- A trust region method for finding second-order stationarity in linearly constrained nonconvex optimization
- A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees
- On high-order model regularization for multiobjective optimization
- Worst-case complexity of an SQP method for nonlinear equality constrained stochastic optimization
- Complexity analysis of a trust funnel algorithm for equality constrained optimization