Linearly convergent bilevel optimization with single-step inner methods

From MaRDI portal

DOI: 10.1007/s10589-023-00527-7
arXiv: 2205.04862
OpenAlex: W4387131582
MaRDI QID: Q6155063


Authors: Ensio Suonperä, Tuomo Valkonen


Publication date: 16 February 2024

Published in: Computational Optimization and Applications

Abstract: We propose a new approach to solving bilevel optimization problems, intermediate between solving full-system optimality conditions with a Newton-type approach, and treating the inner problem as an implicit function. The overall idea is to solve the full-system optimality conditions, but to precondition them to alternate between taking steps of simple conventional methods for the inner problem, the adjoint equation, and the outer problem. While the inner objective has to be smooth, the outer objective may be nonsmooth subject to a prox-contractivity condition. We prove linear convergence of the approach for combinations of gradient descent and forward-backward splitting with exact and inexact solution of the adjoint equation. We demonstrate good performance on learning the regularization parameter for anisotropic total variation image denoising, and the convolution kernel for image deconvolution.
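The alternating scheme described in the abstract (one step on the inner problem, one on the adjoint equation, one forward-backward step on the outer problem) can be illustrated on a toy instance. The problem below, the variable names, and the step sizes are my own choices for illustration, not taken from the paper: the inner problem is F(u, θ) = ½‖u − b‖² + (θ/2)‖u‖², the outer objective is J(u) = ½‖u − u_true‖², and the constraint θ ≥ 0 is handled by a prox (projection). With b = 2·u_true the analytic optimum is θ* = 1, since u(θ) = b/(1+θ).

```python
import numpy as np

# Toy bilevel instance (a sketch of the single-step alternating idea, not the paper's code):
#   inner:  F(u, th) = 0.5*||u - b||^2 + (th/2)*||u||^2   (smooth; minimizer u(th) = b/(1+th))
#   outer:  J(u)     = 0.5*||u - u_true||^2, with th >= 0 enforced by a prox (projection)
u_true = np.array([1.0, 2.0])
b = 2.0 * u_true                  # data chosen so the analytic optimum is th* = 1

tau, sigma, eta = 0.3, 0.3, 0.05  # inner / adjoint / outer step sizes (assumed values)
u = np.zeros(2)                   # inner variable
p = np.zeros(2)                   # adjoint variable
th = 0.0                          # outer variable

for _ in range(2000):
    # one gradient-descent step on the inner problem: grad_u F = (1+th)*u - b
    u = u - tau * ((1.0 + th) * u - b)
    # one gradient step on the adjoint equation H p = grad_u J, Hessian H = (1+th)*I
    p = p - sigma * ((1.0 + th) * p - (u - u_true))
    # forward-backward step on th: hypergradient = -(d_th grad_u F)^T p = -u.p,
    # followed by the prox of the nonnegativity constraint (projection onto th >= 0)
    th = max(0.0, th + eta * np.dot(u, p))

print(th)  # converges to the analytic optimum th* = 1
```

Note that no inner or adjoint subproblem is ever solved to completion: each outer iteration takes exactly one conventional step on each block, which is the structural feature the abstract highlights.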


Full work available at URL: https://arxiv.org/abs/2205.04862









Cited In (1)





