Rate-optimal nonparametric estimation for random coefficient regression models (Q2203623)
7 October 2020
The article considers the linear random coefficient regression model $Y_j=A_{0,j}+A_{1,j}X_j$ for i.i.d. data $(X_j,Y_j)$, $j=1,\dots,n$. Here the coefficient vectors $A_j=(A_{0,j},A_{1,j})$ are unobserved i.i.d. random vectors with a bivariate Lebesgue density $f_A$, and $A_j$ and $X_j$ are independent. The aim of the article is to obtain optimal convergence rates when $f_A$ is estimated over Hölder smoothness classes and the $X_j$ have a Lebesgue density $f_X$ satisfying a so-called ``design density'' inequality. The study concentrates on how the tail parameter $\beta>1$ in this inequality influences the minimax-optimal rate of convergence for estimating $f_A$ at a given point $a\in\mathbb{R}^2$. The second section is devoted to the construction of an estimator for $f_A$, denoted by $\hat{f}_A(a;h,\delta)$; the procedure is inspired by an earlier paper [\textit{S. Hoderlein} et al., J. Econom. 201, No. 1, 144--169 (2017; Zbl 1391.62058)]. The third section gives an upper bound on the estimator's rate of convergence, establishes minimax optimality of the achieved rates for the pointwise risk, and provides uniform rates of convergence. The fourth section presents adaptation with respect to $\beta$ for given smoothness, as well as adaptation by the Lepski method. The fifth section contains all proofs.
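To fix ideas, the observation scheme of the model can be sketched in a few lines of Python. This is a purely illustrative simulation, not the paper's estimator: the particular Gaussian choice of $f_A$ and the Cauchy design density for $X_j$ are assumptions made here for concreteness (a heavy-tailed design mimics the role of the tail parameter $\beta$ in the design-density inequality).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unobserved i.i.d. coefficient vectors A_j = (A_{0,j}, A_{1,j}) drawn
# from a bivariate density f_A (here a correlated Gaussian, chosen only
# for illustration -- the paper treats f_A nonparametrically).
mean = np.array([1.0, 2.0])
cov = np.array([[1.0, 0.3],
                [0.3, 0.5]])
A = rng.multivariate_normal(mean, cov, size=n)

# Regressors X_j, independent of A_j, with design density f_X
# (here standard Cauchy, a heavy-tailed illustrative choice).
X = rng.standard_cauchy(n)

# Observed responses: Y_j = A_{0,j} + A_{1,j} * X_j.
Y = A[:, 0] + A[:, 1] * X

# Only the pairs (X_j, Y_j) are observed; the statistical task is to
# recover the density f_A of the unobserved coefficients from them.
print(Y.shape)
```

The key feature visible in the sketch is that $A_j$ never appears in the observed data $(X_j, Y_j)$, which is what makes the recovery of $f_A$ an ill-posed inverse problem.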
adaptive estimation
ill-posed inverse problem
minimax risk
nonparametric estimation