Bayesian Stochastic Gradient Descent for Stochastic Optimization with Streaming Input Data

From MaRDI portal
Publication: 6188508

DOI: 10.1137/22M1478951
arXiv: 2202.07581
OpenAlex: W4391223750
MaRDI QID: Q6188508
FDO: Q6188508


Authors: Tianyi Liu, Yifan Lin, Enlu Zhou


Publication date: 7 February 2024

Published in: SIAM Journal on Optimization

Abstract: We consider stochastic optimization under distributional uncertainty, where the unknown distributional parameter is estimated from streaming data that arrive sequentially over time. Moreover, the data may depend on the decision in effect at the time they are generated. For both decision-independent and decision-dependent uncertainties, we propose an approach that jointly estimates the distributional parameter via the Bayesian posterior distribution and updates the decision by applying stochastic gradient descent to the Bayesian average of the objective function. Our approach converges asymptotically over time and achieves the convergence rates of classical SGD in the decision-independent case. We demonstrate the empirical performance of our approach on both synthetic test problems and a classical newsvendor problem.
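The abstract's loop — update the Bayesian posterior on the unknown parameter from each streaming observation, then take an SGD step on the posterior-averaged objective — can be sketched in a few lines. The following is a minimal illustration on an assumed toy problem (a quadratic objective with a conjugate normal-normal model), not the paper's exact algorithm; all names and constants are illustrative assumptions.

```python
import numpy as np

# Toy decision-independent problem (our assumption, not from the paper):
# minimize F(x; theta) = E[(x - xi)^2] with xi ~ N(theta, sigma^2),
# where theta is unknown and learned from the stream via a conjugate
# normal-normal posterior. The optimum is x* = theta.
rng = np.random.default_rng(0)
theta_true, sigma = 2.0, 1.0   # data-generating mean (unknown to the solver)
mu, tau2 = 0.0, 10.0           # prior on theta: N(mu, tau2)
x = 0.0                        # initial decision

for t in range(1, 2001):
    xi = rng.normal(theta_true, sigma)     # one new streaming data point
    # Bayesian posterior update for theta (conjugate normal-normal)
    prec = 1.0 / tau2 + 1.0 / sigma**2
    mu = (mu / tau2 + xi / sigma**2) / prec
    tau2 = 1.0 / prec
    # Stochastic gradient of the Bayesian-averaged objective:
    # sample theta from the posterior, then a synthetic xi from the model
    theta_s = rng.normal(mu, np.sqrt(tau2))
    xi_hat = rng.normal(theta_s, sigma)
    grad = 2.0 * (x - xi_hat)              # d/dx of (x - xi_hat)^2
    x -= (0.5 / t) * grad                  # classical O(1/t) SGD step size

print(x)  # x approaches theta_true
```

With the O(1/t) step size on this quadratic, each iterate is effectively a running average of the posterior-predictive samples, so the decision tracks the posterior mean as it concentrates on the true parameter.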


Full work available at URL: https://arxiv.org/abs/2202.07581






Cited In (4)




