A new perspective on least squares under convex constraint


DOI: 10.1214/14-AOS1254
zbMATH Open: 1302.62053
arXiv: 1402.0830
MaRDI QID: Q482891


Author: Sourav Chatterjee


Publication date: 6 January 2015

Published in: The Annals of Statistics

Abstract: Consider the problem of estimating the mean of a Gaussian random vector when the mean vector is assumed to lie in a given convex set. The most natural solution is to take the Euclidean projection of the data vector onto this convex set; in other words, to perform "least squares under a convex constraint." Many problems in modern statistics and statistical signal processing are special cases of this general situation. Examples include the lasso and other high-dimensional regression techniques, function estimation problems, matrix estimation and completion, shape-restricted regression, constrained denoising, linear inverse problems, etc. This paper presents three general results about this problem: (a) an exact computation of the main term in the estimation error by relating it to expected maxima of Gaussian processes (existing results give only upper bounds), (b) a theorem showing that the least squares estimator is always admissible up to a universal constant in any problem of this kind, and (c) a counterexample showing that the least squares estimator may not always be minimax rate-optimal. The result from part (a) is then used to compute the error of the least squares estimator in two examples of contemporary interest.


Full work available at URL: https://arxiv.org/abs/1402.0830
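As a minimal illustration of the estimator the abstract describes (a sketch, not code from the paper), the following Python snippet denoises a Gaussian observation by Euclidean projection onto an l1-ball, a lasso-type special case of constrained denoising. The projection routine is the standard sorting-based algorithm for the l1-ball; the dimension, radius, and signal are illustrative choices, not values from the paper.

import numpy as np

def project_l1_ball(y, radius):
    """Euclidean projection of y onto the convex set {x : ||x||_1 <= radius}."""
    if np.abs(y).sum() <= radius:
        return y.copy()                           # already inside the ball
    u = np.sort(np.abs(y))[::-1]                  # magnitudes, descending
    css = np.cumsum(u)
    # largest index rho with u[rho] * (rho + 1) > css[rho] - radius
    rho = np.nonzero(u * np.arange(1, y.size + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)     # soft-threshold level
    return np.sign(y) * np.maximum(np.abs(y) - theta, 0.0)

rng = np.random.default_rng(0)
n = 500
mu = np.zeros(n)
mu[:5] = 4.0                                      # sparse true mean
y = mu + rng.standard_normal(n)                   # Gaussian observation of mu
# Least squares under the convex constraint ||x||_1 <= ||mu||_1:
mu_hat = project_l1_ball(y, radius=np.abs(mu).sum())
print("squared error of projection estimator:", float(np.sum((mu_hat - mu) ** 2)))
print("squared error of raw observation:     ", float(np.sum((y - mu) ** 2)))

With the l1-radius matched to the sparse true mean, the projection estimator's squared error scales roughly with the sparsity (up to a logarithmic factor) rather than with the ambient dimension; computing the main term of such errors exactly is what part (a) of the paper addresses.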










Cited in: 34 documents






