On the exponential rate of convergence of the least squares estimator in the nonlinear regression model with Gaussian errors (Q796937)

    Statements

    On the exponential rate of convergence of the least squares estimator in the nonlinear regression model with Gaussian errors (English)
    1984
    Let \(\hat\theta_n\) be a least squares estimator in the nonlinear regression model \(Y_n=g_n(\theta)+\epsilon_n\), \(n\geq 1\), \(\theta\in\Theta\subset\mathbb R\), where the \(\epsilon_n\) are i.i.d. standard normal random variables. Let \(K\) be a compact subset of \(\Theta\). Suppose there exist constants \(k_i>0\), \(i=1,2\), such that \[ nk_1(\theta_1-\theta_2)^2\leq \psi_n(\theta_1,\theta_2)\leq nk_2(\theta_1-\theta_2)^2 \] for all \(\theta_1,\theta_2\in\Theta\), where \(\psi_n(\theta_1,\theta_2)=\sum_{i=1}^{n}[g_i(\theta_1)-g_i(\theta_2)]^2\). Then there exist positive constants \(B\) and \(b\), depending on \(K\), such that \[ \sup_{\theta\in K}P_{\theta}\{n^{1/2}|\hat\theta_n-\theta|>\rho\}\leq Be^{-b\rho^2} \] for all \(\rho>0\) and \(n\geq 1\).
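    For instance (taking a simple linear mean function purely for illustration, not from the paper itself), if \(g_i(\theta)=\theta x_i\) with a design satisfying \(0<c_1\leq x_i^2\leq c_2\) for all \(i\), then \[ \psi_n(\theta_1,\theta_2)=(\theta_1-\theta_2)^2\sum_{i=1}^{n}x_i^2, \] so the growth condition holds with \(k_1=c_1\) and \(k_2=c_2\). The conclusion says that \(n^{1/2}(\hat\theta_n-\theta)\) has sub-Gaussian tails uniformly over \(K\); in particular, integrating the tail bound shows that all moments of \(n^{1/2}(\hat\theta_n-\theta)\) are bounded uniformly in \(n\) and \(\theta\in K\).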
    exponential rate of convergence
    Gaussian errors
    least squares estimator