Selection of variables and dimension reduction in high-dimensional non-parametric regression
From MaRDI portal
Publication:1951796
DOI: 10.1214/08-EJS327 · zbMath: 1320.62085 · arXiv: 0811.1115 · MaRDI QID: Q1951796
Karine Bertin, Guillaume Lecué
Publication date: 24 May 2013
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/0811.1115
MSC classification: 62G08 (Nonparametric regression and quantile regression)
Related Items
- Improvement on LASSO-type estimator in nonparametric regression
- Variable selection in heteroscedastic single-index quantile regression
- Kernel Meets Sieve: Post-Regularization Confidence Bands for Sparse Additive Model
- High-dimensional local linear regression under sparsity and convex losses
- Learning sparse gradients for variable selection and dimension reduction
- Minimal conditions for consistent variable selection in high dimension
- Tight conditions for consistency of variable selection in the context of high dimensionality
- Statistical inference in sparse high-dimensional additive models
- Bias-corrected inference for multivariate nonparametric regression: model selection and oracle property
- Variable selection of high-dimensional non-parametric nonlinear systems by derivative averaging to avoid the curse of dimensionality
- Variable selection with Hamming loss
- Variable selection consistency of Gaussian process regression
- Adaptive estimation of multivariate piecewise polynomials and bounded variation functions by optimal decision trees
- GRID: a variable selection and structure discovery method for high dimensional nonparametric regression
- Statistical inference in compound functional models
- Sharp variable selection of a sparse submatrix in a high-dimensional noisy matrix
Cites Work
- Minimax theory of image reconstruction
- Fast learning rates for plug-in classifiers
- Lasso-type recovery of sparse representations for high-dimensional data
- Robust reconstruction of functions by the local-approximation method
- Simultaneous analysis of Lasso and Dantzig selector
- Rodeo: Sparse, greedy nonparametric regression
- Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
- Learning Theory and Kernel Machines
- Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data