On the convergence of proximal gradient methods for convex simple bilevel optimization

From MaRDI portal
Publication: Q6509969

arXiv: 2305.03559
MaRDI QID: Q6509969
FDO: Q6509969


Authors: Puya Latafat, Andreas Themelis, Silvia Villa, Panagiotis Patrinos



Abstract: Bilevel optimization is a comprehensive framework that bridges single- and multi-objective optimization. It encompasses many general formulations, including, but not limited to, standard nonlinear programs. This work demonstrates how elementary proximal gradient iterations can be used to solve a wide class of convex bilevel optimization problems without involving subroutines. Compared to and improving upon existing methods, ours (1) can handle a wider class of problems, including nonsmooth terms in the upper and lower level problems, (2) does not require strong convexity or global Lipschitz gradient continuity assumptions, and (3) provides a systematic adaptive stepsize selection strategy, allowing for the use of large stepsizes while being insensitive to the choice of parameters.
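To illustrate the problem class, here is a minimal sketch of how plain proximal gradient iterations with a vanishing upper-level weight can solve a convex simple bilevel problem. This is an illustrative toy, not the paper's exact algorithm: the problem data (`A`, `b`), the harmonic weight `sigma`, and the fixed stepsize `alpha` are all assumptions chosen for the example; the paper itself provides an adaptive stepsize strategy.

```python
import numpy as np

# Toy simple bilevel problem (illustrative assumption, not the paper's method):
#   lower level: minimize f2(x) = 0.5 * ||A x - b||^2, whose solution set is
#                the line x1 + 2*x2 = 2 (A is underdetermined);
#   upper level: among lower-level minimizers, minimize f1(x) = ||x||_1.
# The unique bilevel solution is x* = (0, 1).
A = np.array([[1.0, 2.0]])
b = np.array([2.0])

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(2)
alpha = 0.18  # fixed stepsize below 1/L, with L = ||A||^2 = 5
for k in range(10000):
    sigma = 1.0 / (k + 1)         # vanishing weight on the upper-level term
    grad = A.T @ (A @ x - b)      # gradient of the smooth lower-level objective
    # One proximal gradient step on f2 + sigma * f1:
    x = soft_threshold(x - alpha * grad, alpha * sigma)

print(x)  # approaches the bilevel solution (0, 1)
```

Because the upper-level weight `sigma` vanishes, each iteration is an ordinary proximal gradient step on a penalized objective, with no inner subroutine, which mirrors the single-loop structure the abstract describes.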




Has companion code repository: https://github.com/pylat/adaptive-bilevel-optimization









