Ghost penalties in nonconvex constrained optimization: diminishing stepsizes and iteration complexity
From MaRDI portal
Publication:5000647
Abstract: We consider nonconvex constrained optimization problems and propose a new approach to convergence analysis based on penalty functions. We use classical penalty functions in an unconventional way: they enter only the theoretical analysis of convergence, while the algorithm itself remains penalty-free. Building on this idea, we establish several new results, including the first general analysis of diminishing stepsize methods in nonconvex constrained optimization, showing convergence to generalized stationary points, and a complexity study for SQP-type algorithms.
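The diminishing-stepsize setting the abstract refers to can be illustrated with a toy sketch. This is not the paper's ghost-penalty algorithm: it is only a generic projected-gradient method with stepsizes gamma_k = gamma0/(k+1) applied to a simple nonconvex objective over a convex (box) constraint; the function names and parameter values are illustrative assumptions.

```python
import numpy as np

def projected_gradient_diminishing(grad, project, x0, n_iters=5000, gamma0=0.1):
    """Iterate x_{k+1} = P(x_k - gamma_k * grad(x_k)) with gamma_k = gamma0/(k+1).

    These stepsizes satisfy sum gamma_k = inf and sum gamma_k^2 < inf,
    the classical diminishing-stepsize conditions.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        gamma = gamma0 / (k + 1)          # diminishing stepsize
        x = project(x - gamma * grad(x))  # gradient step, then projection
    return x

# Nonconvex smooth objective f(x) = (x^2 - 1)^2 over the box [0, 2];
# x = 1 is a constrained stationary point (here the global minimizer).
grad = lambda x: 4.0 * x * (x**2 - 1.0)   # f'(x)
project = lambda x: np.clip(x, 0.0, 2.0)  # Euclidean projection onto [0, 2]

x_star = projected_gradient_diminishing(grad, project, x0=np.array([0.5]))
# x_star approaches the stationary point x = 1
```

The divergent stepsize sum guarantees the iterates can travel any required distance, while square-summability damps the noise-like oscillation of individual steps; this is the classical trade-off that the paper's analysis extends to generalized stationarity in the nonconvex constrained case.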
Recommendations
- Diminishing stepsize methods for nonconvex composite problems via ghost penalties: from the general to the convex regular constrained case
- Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
- Worst-case evaluation complexity of a quadratic penalty method for nonconvex optimization
- Constrained nonconvex nonsmooth optimization via proximal bundle method
- Penalization in non-classical convex programming via variational convergence
Cites work
- scientific article; zbMATH DE number 3934328
- scientific article; zbMATH DE number 4093168
- scientific article; zbMATH DE number 3672000
- scientific article; zbMATH DE number 3791104
- scientific article; zbMATH DE number 51132
- scientific article; zbMATH DE number 3894826
- A Global Convergence Theory for Dennis, El-Alem, and Maciel's Class of Trust-Region Algorithms for Constrained Optimization without Assuming Regularity
- A Robust Algorithm for Optimization with General Equality and Inequality Constraints
- A Robust Primal-Dual Interior-Point Algorithm for Nonlinear Programs
- A Robust Trust Region Method for Constrained Nonlinear Programming Problems
- A Sequential Quadratic Programming Method Without A Penalty Function or a Filter for Nonlinear Equality Constrained Optimization
- A class of globally convergent optimization methods based on conservative convex separable approximations
- A moving balls approximation method for a class of smooth constrained minimization problems
- A robust sequential quadratic programming method
- A sequential parametric convex approximation method with applications to nonconvex truss topology design problems
- A sequential quadratic programming method for potentially infeasible mathematical programs
- A unified convergence analysis of block successive minimization methods for nonsmooth optimization
- An extended sequential quadratically constrained quadratic programming algorithm for nonlinear, semidefinite, and second-order cone programming
- Asynchronous Optimization Over Graphs: Linear Convergence Under Error Bound Conditions
- Black-Box Complexity of Local Minimization
- Constrained Consensus and Optimization in Multi-Agent Networks
- Convex optimization algorithms
- Corrigendum to: ``On the complexity of finding first-order critical points in constrained nonlinear optimization''
- Cubic regularization of Newton method and its global performance
- Decomposition by Partial Linearization: Parallel Optimization of Multi-Agent Systems
- Epi-convergent smoothing with applications to convex composite functions
- Evaluation complexity bounds for smooth constrained nonlinear optimization using scaled KKT conditions and high-order models
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- Exact Penalty Functions in Constrained Optimization
- Feasible methods for nonconvex nonsmooth problems with applications in green communications
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Global convergence of an SQP method without boundedness assumptions on any of the iterative sequences
- Hybrid Random/Deterministic Parallel Algorithms for Convex and Nonconvex Big Data Optimization
- Hölder continuity of solutions to a parametric variational inequality
- Lagrange multipliers and subderivatives of optimal value functions in nonlinear programming
- Lipschitzian properties of multifunctions
- Majorization-Minimization Algorithms in Signal Processing, Communications, and Machine Learning
- Majorization-minimization procedures and convergence of SQP methods for semi-algebraic and tame programs
- Non-Convex Distributed Optimization
- On High-order Model Regularization for Constrained Optimization
- On Nonconvex Decentralized Gradient Descent
- On the Sequential Quadratically Constrained Quadratic Programming Methods
- On the convergence of a new trust region algorithm
- On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming
- On the evaluation complexity of cubic regularization methods for potentially rank-deficient nonlinear least-squares problems and its relevance to constrained nonlinear optimization
- On the exactness of a class of nondifferentiable penalty functions
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Optimization methods for large-scale machine learning
- Parallel Selective Algorithms for Nonconvex Big Data Optimization
- Parallel and Distributed Methods for Constrained Nonconvex Optimization-Part II: Applications in Communications and Machine Learning
- Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory
- Robust Stochastic Approximation Approach to Stochastic Programming
- Robust recursive quadratic programming algorithm model with global and superlinear convergence properties
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Stochastic model-based minimization of weakly convex functions
- Stochastic subgradient method converges on tame functions
- Training highly multiclass classifiers
- Variational Analysis
- Variations and extension of the convex-concave procedure
Cited in (7)
- A bilevel approach to ESG multi-portfolio selection
- Stochastic optimization over proximally smooth sets
- Worst-case evaluation complexity of a quadratic penalty method for nonconvex optimization
- Diminishing stepsize methods for nonconvex composite problems via ghost penalties: from the general to the convex regular constrained case
- Convergence rate for diminishing stepsize methods in nonconvex constrained optimization via ghost penalties
- Level constrained first order methods for function constrained optimization
- Combining approximation and exact penalty in hierarchical programming