Ghost penalties in nonconvex constrained optimization: diminishing stepsizes and iteration complexity
DOI: 10.1287/moor.2020.1079 · zbMATH Open: 1471.90137 · arXiv: 1709.03384 · OpenAlex: W3128430476 · MaRDI QID: Q5000647 · FDO: Q5000647
Lorenzo Lampariello, Gesualdo Scutari, Francisco Facchinei, Vyacheslav Kungurtsev
Publication date: 15 July 2021
Published in: Mathematics of Operations Research
Full work available at URL: https://arxiv.org/abs/1709.03384
Recommendations
- Diminishing stepsize methods for nonconvex composite problems via ghost penalties: from the general to the convex regular constrained case
- Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
- Worst-case evaluation complexity of a quadratic penalty method for nonconvex optimization
- Constrained nonconvex nonsmooth optimization via proximal bundle method
- Penalization in non-classical convex programming via variational convergence
Keywords: constrained optimization; iteration complexity; nonconvex problem; diminishing stepsize; generalized stationary point
MSC classification: Numerical mathematical programming methods (65K05) · Nonlinear programming (90C30) · Abstract computational complexity for mathematical programming problems (90C60)
Cites Work
- Variations and extension of the convex-concave procedure
- Variational Analysis
- A unified convergence analysis of block successive minimization methods for nonsmooth optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- A Robust Trust Region Method for Constrained Nonlinear Programming Problems
- Constrained Consensus and Optimization in Multi-Agent Networks
- Global convergence of an SQP method without boundedness assumptions on any of the iterative sequences
- Hölder continuity of solutions to a parametric variational inequality
- A Sequential Quadratic Programming Method Without A Penalty Function or a Filter for Nonlinear Equality Constrained Optimization
- Parallel Selective Algorithms for Nonconvex Big Data Optimization
- Hybrid Random/Deterministic Parallel Algorithms for Convex and Nonconvex Big Data Optimization
- Decomposition by Partial Linearization: Parallel Optimization of Multi-Agent Systems
- Exact Penalty Functions in Constrained Optimization
- Epi-convergent smoothing with applications to convex composite functions
- On the Evaluation Complexity of Composite Function Minimization with Applications to Nonconvex Nonlinear Programming
- Lagrange multipliers and subderivatives of optimal value functions in nonlinear programming
- A Moving Balls Approximation Method for a Class of Smooth Constrained Minimization Problems
- A sequential parametric convex approximation method with applications to nonconvex truss topology design problems
- A class of globally convergent optimization methods based on conservative convex separable approximations
- On the convergence of a new trust region algorithm
- A Global Convergence Theory for Dennis, El-Alem, and Maciel's Class of Trust-Region Algorithms for Constrained Optimization without Assuming Regularity
- Cubic regularization of Newton method and its global performance
- Lipschitzian properties of multifunctions
- A robust sequential quadratic programming method
- On the evaluation complexity of cubic regularization methods for potentially rank-deficient nonlinear least-squares problems and its relevance to constrained nonlinear optimization
- Corrigendum to: ``On the complexity of finding first-order critical points in constrained nonlinear optimization''
- A Robust Primal-Dual Interior-Point Algorithm for Nonlinear Programs
- Black-Box Complexity of Local Minimization
- On the Sequential Quadratically Constrained Quadratic Programming Methods
- A Robust Algorithm for Optimization with General Equality and Inequality Constraints
- Robust recursive quadratic programming algorithm model with global and superlinear convergence properties
- Majorization-Minimization Algorithms in Signal Processing, Communications, and Machine Learning
- A sequential quadratic programming method for potentially infeasible mathematical programs
- On the exactness of a class of nondifferentiable penalty functions
- Optimization Methods for Large-Scale Machine Learning
- An extended sequential quadratically constrained quadratic programming algorithm for nonlinear, semidefinite, and second-order cone programming
- Majorization-minimization procedures and convergence of SQP methods for semi-algebraic and tame programs
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- On Nonconvex Decentralized Gradient Descent
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory
- Non-Convex Distributed Optimization
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Parallel and Distributed Methods for Constrained Nonconvex Optimization-Part II: Applications in Communications and Machine Learning
- On High-order Model Regularization for Constrained Optimization
- Stochastic Model-Based Minimization of Weakly Convex Functions
- Stochastic subgradient method converges on tame functions
- Evaluation complexity bounds for smooth constrained nonlinear optimization using scaled KKT conditions and high-order models
- Feasible methods for nonconvex nonsmooth problems with applications in green communications
- Asynchronous Optimization Over Graphs: Linear Convergence Under Error Bound Conditions
Cited In (6)
- Combining approximation and exact penalty in hierarchical programming
- A bilevel approach to ESG multi-portfolio selection
- Stochastic optimization over proximally smooth sets
- Diminishing stepsize methods for nonconvex composite problems via ghost penalties: from the general to the convex regular constrained case
- Level constrained first order methods for function constrained optimization
- Worst-case evaluation complexity of a quadratic penalty method for nonconvex optimization