Penalized least squares estimation in the additive model with different smoothness for the components
DOI: 10.1016/j.jspi.2015.02.003
zbMath: 1328.62256
arXiv: 1405.6584
OpenAlex: W2049932513
MaRDI QID: Q2348102
Publication date: 10 June 2015
Published in: Journal of Statistical Planning and Inference
Full work available at URL: https://arxiv.org/abs/1405.6584
Related Items
- Slope heuristics and V-Fold model selection in heteroscedastic regression using strongly localized bases
- On concentration for (regularized) empirical risk minimization
- On the uniform convergence of empirical norms and inner products, with application to causal inference
- Minimax optimal estimation in partially linear additive models under high dimension
- Gradient-based Regularization Parameter Selection for Problems With Nonsmooth Penalty Functions
Cites Work
- The existence and asymptotic properties of a backfitting projection algorithm under weak conditions
- Nonparametric regression with the scale depending on auxiliary variable
- Optimal estimation in additive regression models
- Subspaces and orthogonal decompositions generated by bounded orthogonal systems
- Additive regression and other nonparametric models
- Weak convergence and empirical processes. With applications to statistics
- On the conditions used to prove oracle results for the Lasso
- An adaptive compression algorithm in Besov spaces
- Statistical inference in compound functional models
- On the uniform convergence of empirical norms and inner products, with application to causal inference
- The sizes of compact subsets of Hilbert space and continuity of Gaussian processes
- The Partial Linear Model in High Dimensions
- A practical guide to splines