Characterization of the equivalence of robustification and regularization in linear and matrix regression

From MaRDI portal
Publication:723995

DOI: 10.1016/J.EJOR.2017.03.051
zbMATH Open: 1403.62040
arXiv: 1411.6160
OpenAlex: W2963772730
Wikidata: Q89226031 (Scholia: Q89226031)
MaRDI QID: Q723995
FDO: Q723995


Authors: Martin S. Copenhaver, Dimitris Bertsimas


Publication date: 25 July 2018

Published in: European Journal of Operational Research

Abstract: The notion of developing statistical methods in machine learning which are robust to adversarial perturbations in the underlying data has been the subject of increasing interest in recent years. A common feature of this work is that the adversarial robustification often corresponds exactly to regularization methods which appear as a loss function plus a penalty. In this paper we deepen and extend the understanding of the connection between robustification and regularization (as achieved by penalization) in regression problems. Specifically, (a) in the context of linear regression, we characterize precisely under which conditions on the model of uncertainty used and on the loss function penalties robustification and regularization are equivalent, and (b) we extend the characterization of robustification and regularization to matrix regression problems (matrix completion and Principal Component Analysis).
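The equivalence the abstract describes can be illustrated with the classic linear-regression case (this is a hedged numerical sketch, not code from the paper): under a Frobenius-norm uncertainty set of radius lambda on the design matrix, the adversarial worst-case loss min_b max_{||D||_F <= lambda} ||y - (X+D)b||_2 equals the penalized loss ||y - Xb||_2 + lambda ||b||_2, because the worst-case perturbation D* = -lambda * r b^T / (||r|| ||b||) (with residual r = y - Xb) attains the bound.

```python
import numpy as np

# Hedged sketch: verify, at a fixed coefficient vector b, that the
# worst-case perturbed loss equals the l2-penalized (not squared) loss.
rng = np.random.default_rng(0)
n, p, lam = 20, 5, 0.7
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
b = rng.standard_normal(p)

r = y - X @ b  # residual at this fixed b

# Worst-case perturbation with ||D||_F = lam (rank-one, aligned with r and b)
D = -lam * np.outer(r, b) / (np.linalg.norm(r) * np.linalg.norm(b))

worst = np.linalg.norm(y - (X + D) @ b)                     # adversarial loss
penalized = np.linalg.norm(r) + lam * np.linalg.norm(b)     # regularized loss
print(abs(worst - penalized) < 1e-10)                       # True
```

Since the identity holds pointwise in b, minimizing either side over b gives the same problem, which is the sense in which robustification and regularization coincide here; the paper characterizes exactly when this equivalence holds for other uncertainty sets and penalties.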


Full work available at URL: https://arxiv.org/abs/1411.6160


Cited In (35)
