Asymptotic accuracy of the least-squares estimates in nearly nonstationary autoregressive models (Q1366380)
Property / full work available at URL: https://doi.org/10.1007/bf02473977
Property / OpenAlex ID: W2115128125
Language | Label | Description | Also known as
---|---|---|---
English | Asymptotic accuracy of the least-squares estimates in nearly nonstationary autoregressive models | scientific article |
Statements
Asymptotic accuracy of the least-squares estimates in nearly nonstationary autoregressive models (English)
29 October 1997
Suppose that for each \(n\geq 1\), \((X_{n,k},\ k=0,1,\dots,n)\) is a first-order nearly nonstationary autoregressive process \[ X_{n,0}=0,\quad X_{n,k}=\beta_nX_{n,k-1}+\varepsilon_{n,k},\quad k=1,\dots,n, \] where \(\beta_n=1+\gamma/n\) with \(\gamma<0\) is an unknown parameter and \((\varepsilon_{n,k},\ k=1,\dots,n)\) is a random noise. The least-squares estimate of the parameter \(\beta_n\) based on the observations \(X_{n,1},\dots,X_{n,n}\) is given by \[ \widehat\beta_n=\left(\sum^n_{k=1}X^2_{n,k-1}\right)^{-1}\sum^n_{k=1}X_{n,k}X_{n,k-1}, \] provided \(\sum^n_{k=1}X^2_{n,k-1}\neq 0\).

In recent years there has been considerable interest in the asymptotic properties of parameter estimates in nearly nonstationary AR models. In particular, it was shown by \textit{N. H. Chan} and \textit{C. Z. Wei} [Ann. Stat. 15, 1050-1063 (1987; Zbl 0638.62082)] and by \textit{P. C. B. Phillips} [Biometrika 74, 535-547 (1987; Zbl 0654.62073)] that, under fairly general conditions on the noise \((\varepsilon_{n,k})\), \[ n(\widehat\beta_n-\beta_n)\xrightarrow[n\to\infty]{\mathcal D}Z:=\left(\int^1_0Y^2(t)\,dt\right)^{-1}\int^1_0Y(t)\,dW(t).\tag{*} \] Here \(W(t)\), \(t\in[0,1]\), denotes the standard Wiener process, whereas \(Y(t)\), \(t\in[0,1]\), is the Ornstein-Uhlenbeck process defined as the solution of the stochastic differential equation \[ dY(t)=\gamma Y(t)\,dt+dW(t),\quad Y(0)=0. \] As usual, \(\xrightarrow{\mathcal D}\) denotes convergence in distribution.

Our goal is to investigate the convergence (*) with respect to a topology of smooth functions, using an approach based on convergence-rate results for central limit theorems in Banach spaces. Both the smooth-functions topology and the uniform distance over balls are involved. Itô's lemma is the main link between the asymptotic behaviour of the error \(\widehat\beta_n-\beta_n\) and the central limit theorem in Banach spaces.
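The limit (*) can be illustrated numerically. The following Python sketch is not from the paper: it assumes i.i.d. standard Gaussian noise and purely illustrative values of \(\gamma\), \(n\), and the number of replications. It simulates the nearly nonstationary AR(1) model, computes the normalized least-squares error \(n(\widehat\beta_n-\beta_n)\), and compares its empirical quantiles with draws of the limit \(Z\) obtained from an Euler discretization of the Ornstein-Uhlenbeck process.

```python
import numpy as np

# Hypothetical illustration (not from the paper): simulate the nearly
# nonstationary AR(1) model X_{n,k} = beta_n X_{n,k-1} + eps_{n,k} with
# beta_n = 1 + gamma/n, gamma < 0, and compare the empirical distribution of
# n*(betahat_n - beta_n) with draws of the limit
#   Z = (int_0^1 Y(t)^2 dt)^{-1} int_0^1 Y(t) dW(t),
# where Y solves dY = gamma*Y dt + dW, Y(0) = 0.  The Gaussian noise and the
# values of gamma, n, reps are assumptions made only for this sketch.

rng = np.random.default_rng(0)
gamma, n, reps = -2.0, 1000, 2000


def ls_error(gamma, n, rng):
    """One draw of n*(betahat_n - beta_n) from a simulated AR(1) path."""
    beta_n = 1.0 + gamma / n
    eps = rng.standard_normal(n)
    x = np.zeros(n + 1)
    for k in range(1, n + 1):
        x[k] = beta_n * x[k - 1] + eps[k - 1]
    betahat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
    return n * (betahat - beta_n)


def limit_draw(gamma, m, rng):
    """One draw of Z via an Euler scheme for the OU process on [0, 1]."""
    dt = 1.0 / m
    dw = np.sqrt(dt) * rng.standard_normal(m)
    y = np.zeros(m + 1)
    for k in range(m):
        y[k + 1] = y[k] + gamma * y[k] * dt + dw[k]
    return np.sum(y[:-1] * dw) / np.sum(y[:-1] ** 2 * dt)


errors = np.array([ls_error(gamma, n, rng) for _ in range(reps)])
limits = np.array([limit_draw(gamma, n, rng) for _ in range(reps)])
print("quartiles of n*(betahat_n - beta_n):", np.percentile(errors, [25, 50, 75]))
print("quartiles of the limit Z:           ", np.percentile(limits, [25, 50, 75]))
```

The two simulations are deliberately driven by the same kind of Gaussian increments, reflecting the functional convergence of \(X_{n,\lfloor nt\rfloor}/\sqrt n\) to \(Y(t)\) that underlies (*); for moderate \(n\) the two sets of quartiles should be close.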
least-squares estimates
autoregressive process
Banach spaces
central limit theorem