Minimax risks for sparse regressions: ultra-high dimensional phenomenons

From MaRDI portal
Publication:1950804




Abstract: Consider the standard Gaussian linear regression model \(Y = X\theta_0 + \varepsilon\), where \(Y \in \mathbb{R}^n\) is a response vector and \(X \in \mathbb{R}^{n \times p}\) is a design matrix. Numerous works have been devoted to building efficient estimators of \(\theta_0\) when \(p\) is much larger than \(n\). In such a situation, a classical approach amounts to assuming that \(\theta_0\) is approximately sparse. This paper studies the minimax risks of estimation and testing over classes of \(k\)-sparse vectors \(\theta_0\). These bounds shed light on the limitations due to high dimensionality. The results encompass the problem of prediction (estimation of \(X\theta_0\)), the inverse problem (estimation of \(\theta_0\)), and linear testing (testing \(X\theta_0 = 0\)). Interestingly, an elbow effect occurs when \(k\log(p/k)\) becomes large compared to \(n\). Indeed, the minimax risks and hypothesis separation distances blow up in this ultra-high dimensional setting. We also prove that even dimension reduction techniques cannot provide satisfying results in an ultra-high dimensional setting. Moreover, we compute the minimax risks when the variance of the noise is unknown. The knowledge of this variance is shown to play a significant role in the optimal rates of estimation and testing. All these minimax bounds provide a characterization of statistical problems that are so difficult that no procedure can provide satisfying results.
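As a schematic summary of the setting described in the abstract (the explicit definition of the sparsity class below is a standard formalization supplied here for illustration, not quoted from the paper):

```latex
% Gaussian sparse linear regression model, notation as in the abstract
\[
  Y = X\theta_0 + \varepsilon, \qquad
  \varepsilon \sim \mathcal{N}(0, \sigma^2 I_n), \qquad
  X \in \mathbb{R}^{n \times p},
\]
% a standard formalization of the class of k-sparse coefficient vectors
% (assumed notation, not taken verbatim from the paper)
\[
  \Theta[k,p] \;=\; \bigl\{\, \theta \in \mathbb{R}^p \;:\;
    \#\{\, j : \theta_j \neq 0 \,\} \le k \,\bigr\},
\]
% the ultra-high dimensional regime, where the abstract states that
% minimax risks and separation distances blow up
\[
  k \log(p/k) \;\gg\; n .
\]
```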





Cited in (53)






