Accelerated Forward-Backward Optimization using Deep Learning
Publication: 6367408
DOI: 10.1137/22M1532548
arXiv: 2105.05210
OpenAlex: W3160815295
MaRDI QID: Q6367408
FDO: Q6367408
Authors: Sebastian Banert, Jevgenija Rudzusika, Ozan Öktem, Jonas Adler
Publication date: 11 May 2021
Abstract: We propose several deep-learning accelerated optimization solvers with convergence guarantees. We use ideas from the analysis of accelerated forward-backward schemes such as FISTA; however, instead of the classical approach of proving convergence for a particular choice of parameters, such as a step size, we show convergence whenever the update is chosen within a specific set. Rather than picking a point in this set using some predefined method, we train a deep neural network to pick the best update. Finally, we show that the method is applicable to several cases of smooth and non-smooth optimization, and we demonstrate results superior to those of established accelerated solvers.
Full work available at URL: https://doi.org/10.1137/22m1532548
Mathematics Subject Classification:
Numerical optimization and variational techniques (65K10)
Convex programming (90C25)
Artificial neural networks and deep learning (68T07)
Large-scale problems in mathematical programming (90C06)
Numerical methods based on nonlinear programming (49M37)
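The abstract above describes a general pattern: run a standard forward-backward (proximal gradient) step, let a learned component propose the update, and safeguard that proposal by restricting it to a set for which convergence can be proven. The following minimal NumPy sketch illustrates this pattern on a LASSO problem. It is not the authors' method: the functions proposed_update (a hand-coded FISTA-style momentum standing in for the trained network) and project_to_safe_set (a simple ball constraint standing in for the paper's analytically derived admissible set) are hypothetical placeholders.

```python
import numpy as np

# Toy problem: min_x 0.5*||A x - b||^2 + lam*||x||_1  (smooth + non-smooth split)
rng = np.random.default_rng(0)
m, n = 50, 100
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient

def grad(x):
    # Gradient of the smooth part 0.5*||A x - b||^2
    return A.T @ (A @ x - b)

def prox_l1(x, t):
    # Proximal operator of t*lam*||.||_1 (soft thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

def proposed_update(x, x_prev, k):
    # Placeholder for the trained network: proposes an extrapolated point
    # via a hand-coded FISTA-like momentum term.
    beta = k / (k + 3.0)
    return x + beta * (x - x_prev)

def project_to_safe_set(y, x, radius):
    # Toy safeguard: clip the proposal to a ball around the current iterate,
    # a crude analogue of restricting updates to a convergence-guaranteeing set.
    d = y - x
    nd = np.linalg.norm(d)
    return x + d * min(1.0, radius / nd) if nd > 0 else x

x = x_prev = np.zeros(n)
for k in range(200):
    y = proposed_update(x, x_prev, k)                   # "network" proposes
    y = project_to_safe_set(y, x, radius=1.0)           # safeguard clips
    x_prev, x = x, prox_l1(y - grad(y) / L, 1.0 / L)    # forward-backward step

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```

In the paper, the admissible set is derived from the convergence analysis of accelerated forward-backward schemes, and a deep network is trained to choose points within it; the ball projection above is only a placeholder for that mechanism.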