A geometric integration approach to nonsmooth, nonconvex optimisation

Publication: 2088134

DOI: 10.1007/S10208-020-09489-2
zbMATH Open: 1500.65033
arXiv: 1807.07554
OpenAlex: W3187980583
MaRDI QID: Q2088134
FDO: Q2088134

Carola-Bibiane Schönlieb, Matthias J. Ehrhardt, G. R. W. Quispel, Erlend S. Riis

Publication date: 21 October 2022

Published in: Foundations of Computational Mathematics

Abstract: The optimisation of nonsmooth, nonconvex functions without access to gradients is a particularly challenging problem that is frequently encountered, for example in model parameter optimisation problems. Bilevel optimisation of parameters is a standard setting in areas such as variational regularisation problems and supervised machine learning. We present efficient and robust derivative-free methods called randomised Itoh--Abe methods. These are generalisations of the Itoh--Abe discrete gradient method, a well-known scheme from geometric integration, which has previously only been considered in the smooth setting. We demonstrate that the method and its favourable energy dissipation properties are well-defined in the nonsmooth setting. Furthermore, we prove that whenever the objective function is locally Lipschitz continuous, the iterates almost surely converge to a connected set of Clarke stationary points. We present an implementation of the methods, and apply it to various test problems. The numerical results indicate that the randomised Itoh--Abe methods are superior to state-of-the-art derivative-free optimisation methods in solving nonsmooth problems while remaining competitive in terms of efficiency.
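For intuition, the following is a minimal Python sketch of a randomised Itoh--Abe-type iteration in the spirit of the abstract: at each step a random direction d is drawn and a nonzero step size alpha is sought satisfying the scalar discrete-gradient equation alpha^2 = -tau * (f(x + alpha*d) - f(x)), which forces monotone dissipation f(x + alpha*d) - f(x) = -alpha^2 / tau <= 0 using only function evaluations. The helper name randomised_itoh_abe_step, the parameters tau, alpha_max and eps, the Brent root-finding choice, and the toy objective are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.optimize import brentq

def randomised_itoh_abe_step(f, x, tau=0.5, alpha_max=10.0, eps=1e-8, rng=None):
    # Hypothetical helper: one derivative-free step along a random direction d.
    # We look for a nonzero alpha with  alpha**2 + tau*(f(x + alpha*d) - f(x)) = 0,
    # so that the accepted step dissipates energy: f(x_new) - f(x) = -alpha**2/tau <= 0.
    rng = np.random.default_rng() if rng is None else rng
    d = rng.standard_normal(x.shape)
    d /= np.linalg.norm(d)
    fx = f(x)
    g = lambda a: a * a + tau * (f(x + a * d) - fx)
    for sign in (1.0, -1.0):            # try the direction and its reverse
        lo, hi = sign * eps, sign * alpha_max
        if g(lo) < 0.0 <= g(hi):        # sign change brackets a nonzero root
            alpha = brentq(g, min(lo, hi), max(lo, hi))
            return x + alpha * d
    return x                            # no descent found along d; keep the point

# Toy usage on a nonsmooth, nonconvex objective (an l1 term plus a cosine).
f = lambda z: np.abs(z).sum() + 0.5 * np.cos(3.0 * z).sum()
x = np.array([2.0, -1.5])
for _ in range(300):
    x = randomised_itoh_abe_step(f, x)
print(x, f(x))

Because the root of the scalar equation is found by bracketing and Brent's method, the update needs no gradient information, which mirrors the derivative-free character of the methods described above; the actual paper treats the choice of directions, step-size parameters and solver tolerances in much more detail.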


Full work available at URL: https://arxiv.org/abs/1807.07554




