An Acceleration Strategy for Randomize-Then-Optimize Sampling Via Deep Neural Networks

From MaRDI portal
Publication:5079536

DOI: 10.4208/JCM.2102-M2020-0339
zbMATH Open: 1499.62346
arXiv: 2104.06285
OpenAlex: W3209989675
MaRDI QID: Q5079536


Authors: L. Yan, Tao Zhou


Publication date: 27 May 2022

Published in: Journal of Computational Mathematics

Abstract: Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems due to repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel strategy to substantially reduce the computational burden of RTO by using a goal-oriented deep neural network (DNN) surrogate approach. In particular, the training points for the DNN surrogate are drawn from a local approximation of the posterior distribution, and the resulting algorithm is shown to provide a flexible and efficient sampling scheme that converges to the direct RTO approach. We demonstrate the accuracy and efficiency of the new algorithm (DNN-RTO) on a Bayesian inverse problem governed by a benchmark elliptic PDE, where it significantly outperforms traditional RTO.
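To make the RTO idea concrete, the following is a minimal, hypothetical sketch of the randomize-then-optimize loop on a toy two-parameter problem: each sample is obtained by perturbing the data and the prior draw, then solving a regularized least-squares problem. The forward model, noise level, and prior here are illustrative stand-ins (not from the paper), and the sketch omits the reweighting/Metropolization step that exact RTO requires, as well as the DNN surrogate (which would simply replace `forward` with a cheap trained network).

```python
import numpy as np
from scipy.optimize import minimize

# Toy nonlinear forward model: a hypothetical stand-in for an
# expensive PDE solve (the setting where a DNN surrogate pays off).
def forward(u):
    return np.array([u[0] + 0.1 * u[1] ** 2, u[1]])

rng = np.random.default_rng(0)
y_obs = np.array([1.0, 0.5])   # synthetic observed data
sigma = 0.1                    # observation noise std (assumed known)

def rto_sample(model):
    """Draw one RTO-style sample: randomize the data and the prior
    mean, then optimize a regularized misfit functional."""
    eps = rng.normal(0.0, sigma, size=2)  # randomized data perturbation
    xi = rng.normal(0.0, 1.0, size=2)     # draw from standard-normal prior

    def objective(u):
        misfit = np.sum((model(u) - (y_obs + eps)) ** 2) / sigma**2
        prior = np.sum((u - xi) ** 2)
        return misfit + prior

    return minimize(objective, x0=np.zeros(2), method="BFGS").x

# Each sample costs one optimization run, i.e. many forward-model
# evaluations -- the cost the DNN surrogate is meant to reduce.
samples = np.array([rto_sample(forward) for _ in range(50)])
print(samples.mean(axis=0))
```

In the paper's DNN-RTO strategy, the surrogate is trained only on points drawn from a local approximation of the posterior, so accuracy is concentrated where the sampler actually visits.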


Full work available at URL: https://arxiv.org/abs/2104.06285




Cited In (4)


