Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints
From MaRDI portal
Publication:1995981
DOI: 10.1007/s10915-021-01409-y
OpenAlex: W3127909534
MaRDI QID: Q1995981
Publication date: 2 March 2021
Published in: Journal of Scientific Computing
Full work available at URL: https://arxiv.org/abs/1908.00131
Keywords: nonconvex optimization; complexity analysis; Newton-conjugate-gradient; optimization with nonlinear equality constraints; proximal augmented Lagrangian
MSC classifications: Analysis of algorithms and problem complexity (68Q25); Large-scale problems in mathematical programming (90C06); Abstract computational complexity for mathematical programming problems (90C60); Nonconvex programming, global optimization (90C26); Nonlinear programming (90C30)
Related Items
- Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
- MINRES: From Negative Curvature Detection to Monotonicity Properties
- Complexity analysis of interior-point methods for second-order stationary points of nonlinear semidefinite optimization problems
- Worst-case evaluation complexity of a quadratic penalty method for nonconvex optimization
- A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
- A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- An adaptive augmented Lagrangian method for large-scale constrained optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Complexity bounds for second-order optimality in unconstrained optimization
- Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- Global minimization using an augmented Lagrangian method with variable lower-level constraints
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- Second-order negative-curvature methods for box-constrained and general constrained optimization
- Perturbed proximal primal-dual algorithm for nonconvex nonsmooth optimization
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- Multiplier and gradient methods
- On the Evaluation Complexity of Cubic Regularization Methods for Potentially Rank-Deficient Nonlinear Least-Squares Problems and Its Relevance to Constrained Nonlinear Optimization
- On the Evaluation Complexity of Composite Function Minimization with Applications to Nonconvex Nonlinear Programming
- A Cone-Continuity Constraint Qualification and Algorithmic Consequences
- On Augmented Lagrangian Methods with General Lower-Level Constraints
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- A Sequential Optimality Condition Related to the Quasi-normality Constraint Qualification and Its Algorithmic Consequences
- Erratum “A second-order sequential optimality condition associated to the convergence of optimization algorithms”
- Convergence Properties of a Second Order Augmented Lagrangian Method for Mathematical Programs with Complementarity Constraints
- A log-barrier Newton-CG method for bound constrained optimization with complexity guarantees
- A Proximal Alternating Direction Method of Multiplier for Linearly Constrained Nonconvex Minimization
- Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
- Complexity and performance of an Augmented Lagrangian algorithm