Variable selection for partially linear models via learning gradients (Q2408225)
Language | Label | Description | Also known as
---|---|---|---
English | Variable selection for partially linear models via learning gradients | scientific article |
Statements
Variable selection for partially linear models via learning gradients (English)
12 October 2017
Variable selection for a partially linear model (PLM) of the form \(y = \mathbf{z}^T\beta + f(\mathbf{w}) + \epsilon\) is studied, with no constraints imposed on \(f\), by employing a gradient learning strategy as in [\textit{L. Yang} et al., J. Mach. Learn. Res. 17, Paper No. 82, 24 p. (2016; Zbl 1360.62199)]. In contrast to the fully nonparametric regression model in [loc. cit.], the PLM is more parsimonious and hence computationally more tractable. A component of \(\mathbf{w}\) is irrelevant for the regression problem if the corresponding partial derivative of \(f\) vanishes identically. Reproducing kernel Hilbert space (RKHS) techniques are used to learn the gradient \(\mathbf{g}\) of \(f\). Group Lasso variable selection is then employed, and asymptotic estimation and selection consistency are established.
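To make the gradient learning strategy more concrete, the following display sketches one plausible form of the estimation criterion; the localisation weights \(\omega_{ij}\), the regularisation parameters \(\lambda_1, \lambda_2\), and the exact way the parametric and nonparametric penalties are combined are illustrative assumptions and need not coincide with the functional analysed in the paper.
\[
(\hat\beta, \hat{\mathbf{g}}) \in \operatorname*{arg\,min}_{\beta,\; \mathbf{g}\in\mathcal{H}_K^{p}} \; \frac{1}{n^2}\sum_{i,j=1}^{n} \omega_{ij}\Bigl(y_i - y_j - (\mathbf{z}_i-\mathbf{z}_j)^T\beta - \mathbf{g}(\mathbf{w}_i)^T(\mathbf{w}_i-\mathbf{w}_j)\Bigr)^{2} + \lambda_1 \|\beta\|_1 + \lambda_2 \sum_{k=1}^{p} \|g_k\|_K .
\]
Here a first-order Taylor expansion of \(f\) gives \(y_i - y_j \approx (\mathbf{z}_i-\mathbf{z}_j)^T\beta + \nabla f(\mathbf{w}_i)^T(\mathbf{w}_i-\mathbf{w}_j)\) for nearby \(\mathbf{w}_i, \mathbf{w}_j\), so the weights \(\omega_{ij}\) concentrate on close pairs, \(\mathcal{H}_K\) denotes the RKHS of a kernel \(K\), and the group-Lasso-type penalty \(\sum_k \|g_k\|_K\) shrinks entire components \(g_k \approx \partial f/\partial w_k\) to zero, which is precisely the criterion by which the corresponding component of \(\mathbf{w}\) is declared irrelevant.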
partially linear model (PLM)
group Lasso
learning gradients
reproducing kernel Hilbert space (RKHS)
variable selection