Ensemble-Based Gradient Inference for Particle Methods in Optimization and Sampling

Publication: 6177924

DOI: 10.1137/22M1533281
zbMath: 1518.65142
arXiv: 2209.15420
MaRDI QID: Q6177924

Philipp Wacker, Claudia Schillings, Claudia Totzeck

Publication date: 31 August 2023

Published in: SIAM/ASA Journal on Uncertainty Quantification

Abstract: We propose an approach based on function evaluations and Bayesian inference to extract higher-order differential information of objective functions from a given ensemble of particles. Pointwise evaluations $\{V(x^i)\}_i$ of some potential $V$ in an ensemble $\{x^i\}_i$ contain implicit information about first- or higher-order derivatives, which can be made explicit with little computational effort (ensemble-based gradient inference, EGI). We suggest using this information to improve established ensemble-based numerical methods for optimization and sampling, such as consensus-based optimization and Langevin-based samplers. Numerical studies indicate that the augmented algorithms are often superior to their gradient-free variants; in particular, the augmented methods help the ensembles to escape their initial domain, to explore multimodal, non-Gaussian settings, and to speed up the collapse at the end of the optimization dynamics. The code for the numerical examples in this manuscript can be found in the paper's GitHub repository (https://github.com/MercuryBench/ensemble-based-gradient.git).
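The core mechanism can be illustrated with a short sketch. The following Python snippet is an illustration only, not the authors' implementation: the paper's Bayesian formulation (with priors, noise models, and possible higher-order terms) is reduced here to a plain least-squares fit of a first-order Taylor model, which estimates the gradient of V at one particle from the ensemble's pointwise evaluations. The function name infer_gradient and all parameters are hypothetical.

    # Minimal sketch of ensemble-based gradient inference under the stated
    # simplifications: fit dV ~ dX @ g by least squares, where dX and dV are
    # offsets and value differences of the ensemble relative to particle i.
    import numpy as np

    def infer_gradient(X, V_vals, i):
        """Estimate grad V at particle X[i] from evaluations V_vals[j] = V(X[j])."""
        dX = X - X[i]                    # offsets x^j - x^i, shape (J, d)
        dV = V_vals - V_vals[i]          # differences V(x^j) - V(x^i), shape (J,)
        mask = np.any(dX != 0, axis=1)   # drop the particle itself (zero row)
        # First-order Taylor model: dV ≈ dX @ g; solve for g in least squares.
        g, *_ = np.linalg.lstsq(dX[mask], dV[mask], rcond=None)
        return g

    # Example: quadratic potential V(x) = |x|^2 with exact gradient 2x.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 2))         # ensemble of J = 20 particles in R^2
    V_vals = np.sum(X**2, axis=1)        # pointwise evaluations V(x^j)
    print(infer_gradient(X, V_vals, 0))  # ≈ 2 * X[0], up to curvature error

In the augmented methods described in the abstract, such an inferred gradient would then stand in for (or complement) the exact gradient term in a particle update, e.g. a Langevin step, without requiring additional evaluations of V.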


Full work available at URL: https://arxiv.org/abs/2209.15420