Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations (Q1785197)

From MaRDI portal

Language: English
Label: Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations
Description: scientific article

    Statements

    Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations (English)
    28 September 2018
    The paper considers the stochastic program \[ J^{\ast}=\inf_{x\in \mathbb{X}}\left\{ \mathbb{E}^{\mathbb{P}}\left[ h\left( x,\xi \right) \right] =\int_{\Xi}h\left( x,\xi \right) \mathbb{P} \left( \mathrm{d}\xi \right) \right\} \] with a feasible set \(\mathbb{X}\subseteq \mathbb{R}^{n}\), an uncertainty set \(\Xi \subseteq \mathbb{R}^{m}\) and a loss function \(h:\mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \bar{\mathbb{R}}\) which depends both on the decision vector \(x\in \mathbb{R}^{n}\) and on the random vector \(\xi \in \mathbb{R}^{m}\), whose distribution \(\mathbb{P}\) is supported on \(\Xi\). In most situations of practical interest the distribution of the uncertain parameters is observable only through a finite training dataset. Using the Wasserstein metric, the authors construct a ball in the space of (multivariate and non-discrete) probability distributions centered at the uniform distribution on the training samples, and they seek decisions that perform best in view of the worst-case distribution within this Wasserstein ball. The authors demonstrate that, under mild assumptions, the resulting distributionally robust optimization problems over Wasserstein balls can be reformulated as finite convex programs -- in many interesting cases even as tractable linear programs. They also show that the resulting solutions enjoy powerful finite-sample performance guarantees. Finally, the theoretical results are illustrated with mean-risk portfolio optimization and uncertainty quantification.
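    To illustrate the kind of reformulation the review refers to, the following is a minimal sketch under added assumptions that are not taken from the review: the loss is piecewise affine, \(h(x,\xi)=\max_{k\le K}\,a_{k}(x)^{\top}\xi+b_{k}(x)\), the support is the whole space \(\Xi=\mathbb{R}^{m}\), and \(B_{\varepsilon}(\hat{\mathbb{P}}_{N})\) denotes a type-1 Wasserstein ball (with respect to some norm \(\|\cdot\|\)) of radius \(\varepsilon\) around the empirical distribution \(\hat{\mathbb{P}}_{N}\) of the training samples \(\hat{\xi}_{1},\dots,\hat{\xi}_{N}\). A standard duality argument then yields the finite convex program \[ \sup_{\mathbb{Q}\in B_{\varepsilon}(\hat{\mathbb{P}}_{N})}\mathbb{E}^{\mathbb{Q}}\bigl[h(x,\xi)\bigr] =\min_{\lambda\ge 0,\;s\in\mathbb{R}^{N}}\;\lambda\varepsilon+\frac{1}{N}\sum_{i=1}^{N}s_{i} \quad\text{s.t.}\quad a_{k}(x)^{\top}\hat{\xi}_{i}+b_{k}(x)\le s_{i},\;\;\|a_{k}(x)\|_{*}\le\lambda\quad\forall\, i\le N,\;k\le K, \] where \(\|\cdot\|_{*}\) is the dual norm; for the \(1\)- or \(\infty\)-norm and \(a_{k},b_{k}\) affine in \(x\), this is a linear program in \((x,\lambda,s)\). When the radius \(\varepsilon\) is calibrated to the sample size via a measure-concentration bound, the optimal value of such a program upper-bounds the true expected cost of the computed decision with high probability, which is the type of finite-sample guarantee mentioned above.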
    stochastic programming
    convex programming
    minimax problems

    Identifiers

    DBLP publication ID: journals/mp/EsfahaniK18