Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity
DOI: 10.1287/moor.2020.1079 | zbMath: 1471.90137 | arXiv: 1709.03384 | OpenAlex: W3128430476 | MaRDI QID: Q5000647
Lorenzo Lampariello, Gesualdo Scutari, Francisco Facchinei, Vyacheslav Kungurtsev
Publication date: 15 July 2021
Published in: Mathematics of Operations Research
Full work available at URL: https://arxiv.org/abs/1709.03384
Keywords: constrained optimization; nonconvex problem; iteration complexity; diminishing stepsize; generalized stationary point
MSC classification: Numerical mathematical programming methods (65K05); Abstract computational complexity for mathematical programming problems (90C60); Nonlinear programming (90C30)
Cites Work
- Corrigendum to: ``On the complexity of finding first-order critical points in constrained nonlinear optimization''
- A sequential parametric convex approximation method with applications to nonconvex truss topology design problems
- Global convergence of an SQP method without boundedness assumptions on any of the iterative sequences
- On the exactness of a class of nondifferentiable penalty functions
- Robust recursive quadratic programming algorithm model with global and superlinear convergence properties
- A robust sequential quadratic programming method
- Hölder continuity of solutions to a parametric variational inequality
- On the convergence of a new trust region algorithm
- An extended sequential quadratically constrained quadratic programming algorithm for nonlinear, semidefinite, and second-order cone programming
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Stochastic subgradient method converges on tame functions
- Evaluation complexity bounds for smooth constrained nonlinear optimization using scaled KKT conditions and high-order models
- Variations and extension of the convex-concave procedure
- Feasible methods for nonconvex nonsmooth problems with applications in green communications
- Cubic regularization of Newton method and its global performance
- A sequential quadratic programming method for potentially infeasible mathematical programs
- A Class of Globally Convergent Optimization Methods Based on Conservative Convex Separable Approximations
- Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models
- Majorization-Minimization Procedures and Convergence of SQP Methods for Semi-Algebraic and Tame Programs
- A Unified Convergence Analysis of Block Successive Minimization Methods for Nonsmooth Optimization
- Epi-convergent Smoothing with Applications to Convex Composite Functions
- On the Evaluation Complexity of Cubic Regularization Methods for Potentially Rank-Deficient Nonlinear Least-Squares Problems and Its Relevance to Constrained Nonlinear Optimization
- A Moving Balls Approximation Method for a Class of Smooth Constrained Minimization Problems
- A Sequential Quadratic Programming Method Without A Penalty Function or a Filter for Nonlinear Equality Constrained Optimization
- On the Evaluation Complexity of Composite Function Minimization with Applications to Nonconvex Nonlinear Programming
- Robust Stochastic Approximation Approach to Stochastic Programming
- Lipschitzian properties of multifunctions
- Lagrange multipliers and subderivatives of optimal value functions in nonlinear programming
- A Robust Trust Region Method for Constrained Nonlinear Programming Problems
- Variational Analysis
- A Robust Algorithm for Optimization with General Equality and Inequality Constraints
- Decomposition by Partial Linearization: Parallel Optimization of Multi-Agent Systems
- Parallel Selective Algorithms for Nonconvex Big Data Optimization
- Hybrid Random/Deterministic Parallel Algorithms for Convex and Nonconvex Big Data Optimization
- Non-Convex Distributed Optimization
- On High-order Model Regularization for Constrained Optimization
- Stochastic Model-Based Minimization of Weakly Convex Functions
- Majorization-Minimization Algorithms in Signal Processing, Communications, and Machine Learning
- Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory
- Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part II: Applications in Communications and Machine Learning
- On Nonconvex Decentralized Gradient Descent
- Optimization Methods for Large-Scale Machine Learning
- A Robust Primal-Dual Interior-Point Algorithm for Nonlinear Programs
- Black-Box Complexity of Local Minimization
- A Global Convergence Theory for Dennis, El-Alem, and Maciel's Class of Trust-Region Algorithms for Constrained Optimization without Assuming Regularity
- Exact Penalty Functions in Constrained Optimization
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Constrained Consensus and Optimization in Multi-Agent Networks
- Asynchronous Optimization Over Graphs: Linear Convergence Under Error Bound Conditions
- On the Sequential Quadratically Constrained Quadratic Programming Methods
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization