The oracle inequalities on simultaneous Lasso and Dantzig selector in high-dimensional nonparametric regression (Q473837)
Mathematics Subject Classification: 62J07, 62G08
zbMATH DE Number: 6372521
scientific article
Statements
The oracle inequalities on simultaneous Lasso and Dantzig selector in high-dimensional nonparametric regression (English)
Publication date: 24 November 2014
Summary: In recent years, a great deal of attention has been focused on the Lasso and the Dantzig selector in high-dimensional linear regression, where the number of variables can be much larger than the sample size. Under a sparsity scenario, earlier papers discussed the relation between the Lasso and the Dantzig selector and derived sparsity oracle inequalities for the prediction risk as well as bounds on the \(L_p\) estimation loss. In this paper, we point out that some earlier work overstates the role of certain sparsity conditions, and that assumptions built on these conditions may lead to poor results. We propose better-suited assumptions and methods that avoid relying on the sparsity condition. In comparison with the results of \textit{P. J. Bickel} et al. [Ann. Stat. 37, No. 4, 1705--1732 (2009; Zbl 1173.62022)], more precise oracle inequalities for the prediction risk and bounds on the \(L_p\) estimation loss are derived when the number of variables can be much larger than the sample size.
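For context, the two estimators named in the summary are the standard ones; in the normalization used by Bickel et al. (design matrix \(X \in \mathbb{R}^{n \times p}\), response vector \(y \in \mathbb{R}^n\), and tuning parameter \(\lambda > 0\); this notation is assumed here rather than taken from the entry), they are defined by
\[
\hat\beta^{\mathrm{Lasso}} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \frac{1}{n}\|y - X\beta\|_2^2 + 2\lambda\|\beta\|_1,
\qquad
\hat\beta^{\mathrm{Dantzig}} \in \operatorname*{arg\,min}\Bigl\{\|\beta\|_1 : \tfrac{1}{n}\bigl\|X^{\top}(y - X\beta)\bigr\|_\infty \le \lambda\Bigr\}.
\]
The oracle inequalities referred to above bound the prediction risk \(\tfrac{1}{n}\|X(\hat\beta - \beta^\ast)\|_2^2\) and the estimation loss \(\|\hat\beta - \beta^\ast\|_p\) of these estimators under sparsity-type conditions on \(X\).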