Ghost penalties in nonconvex constrained optimization: diminishing stepsizes and iteration complexity

From MaRDI portal
Publication:5000647




Abstract: We consider nonconvex constrained optimization problems and propose a new approach to their convergence analysis based on penalty functions. We use classical penalty functions in an unconventional way: the penalty functions enter only the theoretical analysis of convergence, while the algorithm itself remains penalty-free. Based on this idea, we establish several new results, including the first general analysis of diminishing stepsize methods in nonconvex constrained optimization, showing convergence to generalized stationary points, and a complexity study for SQP-type algorithms.
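To illustrate the diminishing-stepsize setting the abstract refers to, the following is a minimal sketch (not the paper's ghost-penalty algorithm): a projected-gradient iteration with stepsizes alpha_k = alpha0 / (k + 1) on a simple nonconvex objective over a box constraint. The test problem, function names, and parameter values are assumptions made for illustration only.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def diminishing_step_pg(grad, x0, lo, hi, alpha0=0.1, iters=2000):
    """Projected gradient with diminishing stepsizes alpha_k = alpha0/(k+1).

    Since sum(alpha_k) diverges while alpha_k -> 0, iterates can be
    expected (under suitable assumptions) to approach a stationary
    point of the constrained problem.
    """
    x = np.asarray(x0, dtype=float).copy()
    for k in range(iters):
        alpha = alpha0 / (k + 1)
        x = project_box(x - alpha * grad(x), lo, hi)
    return x

# Illustrative nonconvex objective f(x) = x^4 - 3x^2 + x on [-2, 2];
# its gradient has three roots, so f has multiple stationary points.
grad = lambda x: 4 * x**3 - 6 * x + 1
x_star = diminishing_step_pg(grad, np.array([2.0]), -2.0, 2.0)
```

After the run, `x_star` sits inside the box with a small gradient, i.e. near a stationary point of the constrained problem; which stationary point is reached depends on the starting point and stepsizes.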


