On bootstrapping two-stage least-squares estimates in stationary linear models (Q795447)

From MaRDI portal





scientific article; zbMATH DE number 3862257

      Statements

      On bootstrapping two-stage least-squares estimates in stationary linear models (English)
      1984
      For \(i=1,\dots,n\) the quantities \((Y_i, U_i, V_i, \epsilon_i)\) are independent and identically distributed. \(Y_i\), \(U_i\), and \(V_i\) are observed; \(\epsilon_i\) is not. \(Y_i\) and \(\epsilon_i\) are scalars, \(U_i\) is \(1\times p\), and \(V_i\) is \(r\times 1\). The model is \(Y_i = U_i A + \epsilon_i\), where \(A\) is \(p\times 1\) and unknown, and \(E(V_i\epsilon_i)=0\). Define \(Q = E(V_i Y_i)\), \(R = E(V_i U_i)\), and \(S = E(V_i V_i^T)\); it is assumed that \(r\ge p\), \(R\) has rank \(p\), and \(S^{-1}\) exists. Define \(Q_n = (1/n)\sum_{i=1}^n V_i Y_i\), \(R_n = (1/n)\sum_{i=1}^n V_i U_i\), \(S_n = (1/n)\sum_{i=1}^n V_i V_i^T\), and \(\Delta_n = (1/n)\sum_{i=1}^n V_i \epsilon_i\). The two-stage least-squares estimator of \(A\) is \(\hat A_n = (R_n^T S_n^{-1} R_n)^{-1} R_n^T S_n^{-1} Q_n\).

      Define \(\hat\epsilon_i(n) = Y_i - U_i\hat A_n\), \(\hat b_n = S_n^{-1}(Q_n - R_n\hat A_n)\), and \(\tilde\epsilon_i(n) = \hat\epsilon_i(n) - V_i^T\hat b_n\). Let \(\tilde\mu_n\) be the empirical distribution which assigns probability \(1/n\) to each of the \(n\) points \((U_i, V_i, \tilde\epsilon_i(n))\). For \(j=1,\dots,n\), generate \((U_j^*, V_j^*, \epsilon_j^*)\), conditionally independent, each with distribution \(\tilde\mu_n\), and set \(Y_j^* = U_j^*\hat A_n + \epsilon_j^*\). Define \(Q_n^*\), \(R_n^*\), \(S_n^*\), \(\Delta_n^*\), and \(\hat A_n^*\) from the starred observations in exactly the same way as \(Q_n\), \(R_n\), \(S_n\), \(\Delta_n\), and \(\hat A_n\) were defined from the original observations.

      It is shown that for almost all sample sequences, \(Q_n^*\to Q\), \(R_n^*\to R\), and \(S_n^*\to S\) in conditional probability as \(n\) increases, and that the conditional law of \(\sqrt{n}\,\Delta_n^*\) and the unconditional law of \(\sqrt{n}\,\Delta_n\) converge to the same limit. Analogous results are derived for other models.
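      The estimator and resampling scheme described above translate directly into code. The following is a minimal NumPy sketch, assuming \(Y\) is stored as an \(n\)-vector and \(U\), \(V\) as \(n\times p\) and \(n\times r\) matrices of i.i.d. rows; the function names `tsls` and `bootstrap_tsls`, the default of 999 replications, and the random-index resampling are illustrative choices, not taken from the paper.

```python
import numpy as np

def tsls(U, V, Y):
    """Two-stage least-squares estimate
    A_hat = (R' S^{-1} R)^{-1} R' S^{-1} Q,
    with Q = mean(V_i Y_i), R = mean(V_i U_i), S = mean(V_i V_i')."""
    n = Y.shape[0]
    Q = V.T @ Y / n          # r-vector
    R = V.T @ U / n          # r x p
    S = V.T @ V / n          # r x r
    S_inv = np.linalg.inv(S)
    return np.linalg.solve(R.T @ S_inv @ R, R.T @ S_inv @ Q)

def bootstrap_tsls(U, V, Y, n_boot=999, seed=None):
    """Bootstrap the 2SLS estimator by resampling the triples
    (U_i, V_i, eps_tilde_i(n)) from their empirical distribution."""
    rng = np.random.default_rng(seed)
    n, p = U.shape
    A_hat = tsls(U, V, Y)
    Q = V.T @ Y / n
    R = V.T @ U / n
    S = V.T @ V / n
    eps_hat = Y - U @ A_hat                       # hat eps_i(n)
    b_hat = np.linalg.solve(S, Q - R @ A_hat)     # hat b_n
    eps_tilde = eps_hat - V @ b_hat               # tilde eps_i(n): recentred residuals
    A_stars = np.empty((n_boot, p))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # draw n triples with replacement
        U_s, V_s, e_s = U[idx], V[idx], eps_tilde[idx]
        Y_s = U_s @ A_hat + e_s                   # Y*_j = U*_j A_hat + eps*_j
        A_stars[b] = tsls(U_s, V_s, Y_s)
    return A_hat, A_stars
```

      In line with the result stated above, the conditional distribution of \(\sqrt{n}(\hat A_n^* - \hat A_n)\), approximated here by the spread of the rows of `A_stars` around `A_hat`, can be used as an asymptotically valid approximation to the sampling distribution of \(\sqrt{n}(\hat A_n - A)\), e.g. for standard errors or percentile intervals.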
      stationary linear models
      asymptotically valid approximations
      distribution of errors
      standard errors
      two-stage least squares estimator
      empirical distribution

      Identifiers