A geometric integration approach to smooth optimisation: Foundations of the discrete gradient method

From MaRDI portal
Publication: Q6301660

arXiv: 1805.06444
MaRDI QID: Q6301660


Authors: Matthias J. Ehrhardt, Erlend S. Riis, Torbjørn Ringholm, Carola-Bibiane Schönlieb


Publication date: 16 May 2018

Abstract: Discrete gradient methods are geometric integration techniques that can preserve the dissipative structure of gradient flows. Due to the monotonic decay of the function values, they are well suited for general convex and nonconvex optimisation problems. Both zero- and first-order algorithms can be derived from the discrete gradient method by selecting different discrete gradients. In this paper, we present a comprehensive analysis of the discrete gradient method for optimisation which provides a solid theoretical foundation. We show that the discrete gradient method is well-posed by proving the existence and uniqueness of iterates for any positive time step, and propose an efficient method for solving the associated discrete gradient equation. Moreover, we establish an O(1/k) convergence rate for convex objectives and prove linear convergence if instead the Polyak-Łojasiewicz inequality is satisfied. The analysis is carried out for three discrete gradients - the Gonzalez discrete gradient, the mean value discrete gradient, and the Itoh-Abe discrete gradient - as well as for a randomised Itoh-Abe method. Our theoretical results are illustrated with a variety of numerical experiments, and we furthermore demonstrate that the methods are robust with respect to stiffness.
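The abstract's zero-order case can be illustrated with the Itoh-Abe discrete gradient, where the implicit update x_{k+1} = x_k - τ ∇̄f(x_k, x_{k+1}) decouples into one scalar equation per coordinate. Below is a minimal sketch, not the paper's implementation: the scalar equation d = -τ (f(z + d eᵢ) - f(z)) / d is solved by plain fixed-point iteration (which contracts for small enough τ on smooth f), and the test problem and step size are illustrative assumptions.

```python
import numpy as np

def itoh_abe_step(f, x, tau, fp_iters=30, h=1e-8):
    """One sweep of a sequential Itoh-Abe discrete gradient method (sketch).

    Each coordinate update d solves the scalar equation
        d = -tau * (f(z + d*e_i) - f(z)) / d,
    here by fixed-point iteration, a simple stand-in for a proper
    scalar root solver. Only function evaluations are used: the
    method is derivative-free (zero-order).
    """
    z = x.copy()
    for i in range(len(z)):
        fz = f(z)
        e = np.zeros_like(z)
        e[i] = 1.0
        # forward-difference initial guess (explicit-Euler-like)
        d = -tau * (f(z + h * e) - fz) / h
        for _ in range(fp_iters):
            if abs(d) < 1e-14:  # gradient essentially zero here
                break
            d = -tau * (f(z + d * e) - fz) / d
        z[i] += d
    return z

# usage on an assumed separable quadratic test problem
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
x = np.zeros(2)
for _ in range(100):
    x = itoh_abe_step(f, x, tau=0.2)
print(x)  # approaches the minimiser (1.0, -0.5)
```

By the defining property of a discrete gradient, ∇̄f(x, y)·(y - x) = f(y) - f(x), each accepted coordinate update satisfies f(y) - f(x) = -d²/τ ≤ 0, which is the monotonic decay of function values mentioned in the abstract.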
