Recovering best statistical guarantees via the empirical divergence-based distributionally robust optimization


DOI: 10.1287/OPRE.2018.1786 · zbMATH Open: 1455.90122 · arXiv: 1605.09349 · OpenAlex: W2962771675 · Wikidata: Q89699888 · Scholia: Q89699888 · MaRDI QID: Q5129181 · FDO: Q5129181


Author: Henry Lam


Publication date: 26 October 2020

Published in: Operations Research

Abstract: We investigate the use of distributionally robust optimization (DRO) as a tractable tool to recover the asymptotic statistical guarantees provided by the central limit theorem for maintaining the feasibility of an expected value constraint under ambiguous probability distributions. We show that constructing the DRO with empirically defined Burg-entropy divergence balls can attain such guarantees. These balls, however, are not justified by the standard data-driven DRO framework, since by themselves they can have low or even zero probability of covering the true distribution. Rather, their superior statistical performance is obtained by linking the resulting DRO with empirical likelihood and empirical processes. We show that the sizes of these balls can be optimally calibrated using chi-square process excursion. We conduct numerical experiments to support our theoretical findings.


Full work available at URL: https://arxiv.org/abs/1605.09349




Cited In (21)


