Linearly convergent bilevel optimization with single-step inner methods

From MaRDI portal
Publication:6155063




Abstract: We propose a new approach to solving bilevel optimization problems, intermediate between solving full-system optimality conditions with a Newton-type approach, and treating the inner problem as an implicit function. The overall idea is to solve the full-system optimality conditions, but to precondition them to alternate between taking steps of simple conventional methods for the inner problem, the adjoint equation, and the outer problem. While the inner objective has to be smooth, the outer objective may be nonsmooth subject to a prox-contractivity condition. We prove linear convergence of the approach for combinations of gradient descent and forward-backward splitting with exact and inexact solution of the adjoint equation. We demonstrate good performance on learning the regularization parameter for anisotropic total variation image denoising, and the convolution kernel for image deconvolution.
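The alternation the abstract describes — one simple step on the inner problem, one on the adjoint equation, one on the outer problem, rather than solving any of them to completion — can be illustrated on a toy scalar bilevel problem. The sketch below is an assumption-laden illustration of that idea, not the paper's algorithm: the objectives, variable names, and step sizes are all invented for demonstration.

```python
# Illustrative toy sketch (not the paper's method): single-step alternation
# between the inner problem, the adjoint equation, and the outer problem.
#
# Assumed toy bilevel problem:
#   inner:  x*(theta) = argmin_x f(x, theta),  f(x, theta) = 0.5 * (x - theta)**2
#   outer:  min_theta g(x*(theta)),            g(x) = 0.5 * (x - target)**2
#
# The adjoint p solves H p = grad_x g with H = hess_xx f = 1, and the outer
# (hyper)gradient is -hess_theta_x f * p = p, since hess_theta_x f = -1 here.

def single_step_bilevel(target=2.0, iters=500, tau=0.5, sigma=0.5, eta=0.2):
    x, p, theta = 0.0, 0.0, 0.0
    for _ in range(iters):
        x -= tau * (x - theta)            # one gradient step on the inner problem
        p -= sigma * (p - (x - target))   # one step toward the adjoint solution p = x - target
        theta -= eta * p                  # one outer step using the current (inexact) adjoint
    return x, theta

x, theta = single_step_bilevel()
print(x, theta)  # both variables approach target = 2.0 at a linear rate
```

Here no loop is nested inside another: each iteration advances all three blocks by a single conventional step, which is the preconditioned full-system viewpoint the abstract contrasts with implicit-function (fully solved inner problem) approaches.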


