Penalized least squares estimation in the additive model with different smoothness for the components
DOI: 10.1016/J.JSPI.2015.02.003 · zbMath: 1328.62256 · arXiv: 1405.6584 · OpenAlex: W2049932513 · MaRDI QID: Q2348102
Publication date: 10 June 2015
Published in: Journal of Statistical Planning and Inference
Abstract: We consider an additive regression model consisting of two components $f^0$ and $g^0$, where the first component $f^0$ is in some sense "smoother" than the second $g^0$. Smoothness is here described in terms of a semi-norm on the class of regression functions. We use a penalized least squares estimator $(\hat f, \hat g)$ of $(f^0, g^0)$ and show that the rate of convergence for $\hat f$ is faster than the rate of convergence for $\hat g$. In fact, both rates are generally as fast as in the case where one of the two components is known. The theory is illustrated by a simulation study. Our proofs rely on recent results from empirical process theory.
Full work available at URL: https://arxiv.org/abs/1405.6584
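To make the setup concrete, below is a minimal sketch (not the paper's code) of penalized least squares in an additive model $y_i = f^0(x_i) + g^0(z_i) + \epsilon_i$. The paper works with general semi-norm penalties; here a truncated-power spline basis with a ridge-type penalty on the spline coefficients stands in, and a larger penalty weight on the first component encodes the assumption that $f^0$ is the smoother one. The basis, penalty, and parameter names (`tp_basis`, `num_knots`, `lam_f`, `lam_g`) are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def tp_basis(x, num_knots):
    """Truncated-power linear spline basis: [1, x, (x - t_1)_+, ..., (x - t_K)_+]."""
    knots = np.quantile(x, np.linspace(0.0, 1.0, num_knots + 2)[1:-1])
    cols = [np.ones_like(x), x] + [np.maximum(x - t, 0.0) for t in knots]
    return np.column_stack(cols)

def fit_additive_pls(y, x, z, lam_f, lam_g, num_knots=10):
    """Minimize ||y - B_f a_f - B_g a_g||^2 + lam_f * pen(a_f) + lam_g * pen(a_g),
    where pen(.) is a ridge penalty on the truncated-power (wiggle) coefficients."""
    Bf = tp_basis(x, num_knots)
    Bg = tp_basis(z, num_knots)[:, 1:]   # drop intercept column for identifiability
    B = np.hstack([Bf, Bg])
    kf, kg = Bf.shape[1], Bg.shape[1]
    pen = np.zeros(kf + kg)
    pen[2:kf] = lam_f                    # penalize only the wiggle terms of f
    pen[kf + 1:] = lam_g                 # penalize only the wiggle terms of g
    coef = np.linalg.solve(B.T @ B + np.diag(pen), B.T @ y)
    return Bf @ coef[:kf], Bg @ coef[kf:]

# Toy data: f0 is very smooth (linear), g0 is rougher; penalize f more heavily.
rng = np.random.default_rng(0)
n = 500
x, z = rng.uniform(size=n), rng.uniform(size=n)
y = 2.0 * x + np.sin(8.0 * np.pi * z) + 0.3 * rng.standard_normal(n)
f_hat, g_hat = fit_additive_pls(y, x, z, lam_f=100.0, lam_g=0.1)
```

In the paper's terminology, the differing penalty weights play the role of the two smoothness semi-norms, and the result is that the fitted smoother component converges at the faster rate it would enjoy if the rougher component were known.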
Cites Work
- The existence and asymptotic properties of a backfitting projection algorithm under weak conditions
- Nonparametric regression with the scale depending on auxiliary variable
- Optimal estimation in additive regression models
- Subspaces and orthogonal decompositions generated by bounded orthogonal systems
- Additive regression and other nonparametric models
- Weak convergence and empirical processes. With applications to statistics
- On the conditions used to prove oracle results for the Lasso
- An adaptive compression algorithm in Besov spaces
- Statistical inference in compound functional models
- On the uniform convergence of empirical norms and inner products, with application to causal inference
- The sizes of compact subsets of Hilbert space and continuity of Gaussian processes
- The Partial Linear Model in High Dimensions
- A practical guide to splines.
Related Items (5)
- Slope heuristics and V-Fold model selection in heteroscedastic regression using strongly localized bases
- On concentration for (regularized) empirical risk minimization
- On the uniform convergence of empirical norms and inner products, with application to causal inference
- Minimax optimal estimation in partially linear additive models under high dimension
- Gradient-based Regularization Parameter Selection for Problems With Nonsmooth Penalty Functions