On parameter estimation for semi-linear errors-in-variables models (Q1383909)
From MaRDI portal
scientific article
8 April 1999
The paper studies a semi-linear structural errors-in-variables model of the form \[ Y= X'\beta+ g(T)+e, \quad X=x+u, \tag{1} \] where \(X\) and \(x\) are \(p\times 1\) random vectors; \(Y\) and \(T\) are random variables, with \(T\) ranging over the interval \([0,1]\); \(e\) is an unobservable error variable and \(u\) is a \(p\times 1\) unobservable error vector with \(E[(e,u')']=0\) and \(\text{Cov} [(e,u')']= \sigma^2\cdot I_{p+1}\), where \(\sigma^2>0\) is an unknown parameter; \(\beta\) is a \(p\times 1\) vector of unknown parameters; and \(g\) is an unknown function satisfying a Lipschitz condition. This model generalizes two important models: a) the semi-linear model \(Y= X'\beta+ g(T)+e\), and b) the linear errors-in-variables model \(Y= X'\beta+e\), \(X= x+u\).

Suppose that the observations \(\{X_i,T_i,Y_i,\ 1\leq i\leq n\}\) form a sample from model (1). The estimators \(\widehat{\beta}_n\), \(\widehat{\sigma}_n^2\) and \(g_n^*\) of \(\beta\), \(\sigma^2\) and \(g\) are constructed by the nearest neighbor-generalized least squares method, an adaptation of the procedure used for the semi-linear model by \textit{P. E. Cheng} [J. Multivariate Anal. 15, 63-72 (1984; Zbl 0542.62031)]. At the first stage the generalized least squares estimator \(\widehat{\beta}_n\) is obtained; then the nearest neighbor estimates \(\widehat{\sigma}_n^2\) and \(g_n^*\) are built from \(\widehat{\beta}_n\). It is shown that \(\widehat{\beta}_n\) and \(\widehat{\sigma}_n^2\) are strongly consistent and asymptotically normal, and that \(g_n^*\) achieves the optimal rate of convergence \[ g_n^*(t)- g(t)= O_p(n^{-1/3}) \quad\text{for each } t\in[0,1]. \] The conditions of these statements look natural; some of them are necessary for studying the optimal convergence rate of the nonparametric regression estimator [see \textit{C. J. Stone}, Ann. Stat. 10, 1040-1053 (1982; Zbl 0511.62048)].

Remark of the reviewer: in the definition (2.4) of \(\widehat{\beta}_n\) the minimum need not exist. It should be explained that under condition 1 the solution \(\widehat{\beta}_n\) of (2.4) exists a.s. and can be chosen as a (measurable) random vector.
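The two-stage procedure described above can be sketched in a small NumPy simulation. This is a minimal illustration, not the paper's definition (2.4): the clipped running-mean smoother, the neighborhood size \(k\), the simulated design, and the choice of \(g\) are all assumptions made here, and the generalized least squares step is realized as orthogonal (total) least squares, which is natural when \(\text{Cov}[(e,u')']=\sigma^2 I_{p+1}\).

```python
import numpy as np

# Sketch of a two-stage nearest neighbor-generalized least squares fit for
# model (1): Y = x'beta + g(T) + e, X = x + u, Cov[(e,u')'] = sigma^2 I_{p+1}.
# Smoother, k, and design are illustrative assumptions, not the paper's (2.4).
rng = np.random.default_rng(0)
n, p, sigma = 2000, 2, 0.1
beta_true = np.array([1.0, -0.5])
g = lambda t: np.cos(np.pi * t)                     # Lipschitz on [0, 1]

T = rng.uniform(0.0, 1.0, n)
x = np.column_stack([np.sin(2 * np.pi * T), T]) + rng.normal(0.0, 1.0, (n, p))
X = x + rng.normal(0.0, sigma, (n, p))              # observed, error-contaminated
Y = x @ beta_true + g(T) + rng.normal(0.0, sigma, n)

# Nearest-neighbor running mean in T: average over the (at most k+1) points
# closest in the sorted order of T, with windows clipped at the boundaries.
k = 50
order = np.argsort(T)
def nn_smooth(v):
    vs = v[order]
    cs = np.concatenate([[0.0], np.cumsum(vs)])
    idx = np.arange(n)
    lo = np.clip(idx - k // 2, 0, n)
    hi = np.clip(idx + k // 2 + 1, 0, n)
    out = np.empty(n)
    out[order] = (cs[hi] - cs[lo]) / (hi - lo)      # clipped local means
    return out

# Stage 1: remove g(T) by local centering, then estimate beta by orthogonal
# (total) least squares, since both sides carry errors of equal variance.
X_t = X - np.column_stack([nn_smooth(X[:, j]) for j in range(p)])
Y_t = Y - nn_smooth(Y)
Z = np.column_stack([X_t, Y_t])
eigval, eigvec = np.linalg.eigh(Z.T @ Z / n)        # eigenvalues ascending
v = eigvec[:, 0]                                    # direction of least variance
beta_hat = -v[:p] / v[p]                            # slope from that direction
sigma2_hat = eigval[0]                              # smallest eigenvalue -> sigma^2

# Stage 2: plug beta_hat back in and smooth the residuals to recover g.
g_hat = nn_smooth(Y - X @ beta_hat)
```

The orthogonal regression step reflects the model structure: after local centering in \(T\), the stacked data concentrate on the \(p\)-dimensional subspace \(\{(a, a'\beta)\}\) plus isotropic noise of variance \(\sigma^2\), so the eigenvector of the smallest eigenvalue recovers \(\beta\) and that eigenvalue estimates \(\sigma^2\).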
Keywords: semi-linear model; nearest neighbor method; strong consistency; asymptotic normality; errors-in-variables model