Variance-type estimation of long memory (Q1593608)

From MaRDI portal
Property / author: Donatas Surgailis / rank
Normal rank
Property / reviewed by: Jiří Anděl / rank
Normal rank


Language: English
Label: Variance-type estimation of long memory
Description: scientific article

    Statements

    Variance-type estimation of long memory (English)
    publication date: 17 January 2001
    Consider a discrete-time, stationary Gaussian process \(\{X_t\}\) with covariance function \(r(j)\sim \sigma ^2 j^{-\theta}\) as \(j\to \infty \), where \(0<\theta <1\); then \(\{X_t\}\) is long-range dependent. The aggregate variance estimator \(\hat {\theta}_m\) of the parameter \(\theta \) is based on a procedure in which the series \(X_1,\dots ,X_N\) is divided into blocks of length \(m=o(N)\) and the observations in each block are replaced by their sample mean. The aggregated series is closer to fractional Gaussian noise than \(\{X_t\}\) is, but \(\hat {\theta}_m\) has a serious bias of order \(1/\log m\). The authors analyze a refined estimator \(\hat {\theta}\) based on least-squares regression across varying levels of aggregation and show that it is less biased than \(\hat {\theta}_m\). If \(0.5<\theta <1\), then \(\hat {\theta}\) converges at a rate that does not depend on \(\theta \) and has an asymptotically normal distribution. For \(0<\theta <0.5\), the rate of convergence varies with \(\theta \) and the asymptotic distribution is complicated.
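    The aggregation step invites a brief illustration. Below is a minimal Python sketch of the generic aggregated-variance approach with a least-squares regression across block sizes; it is an illustration under assumptions, not the authors' exact refined estimator \(\hat {\theta}\): the function name and the geometric grid of block sizes are choices made here.

    import numpy as np

    def aggregated_variance_theta(x, block_sizes):
        # Sketch of the aggregated-variance idea (not the paper's refined
        # estimator): for r(j) ~ sigma^2 * j^{-theta} with 0 < theta < 1,
        # the sample mean over a block of length m has variance of order
        # m^{-theta}, so the slope of log Var(block mean) versus log m
        # estimates -theta. Assumes block_sizes yields >= 2 usable levels.
        x = np.asarray(x, dtype=float)
        N = len(x)
        log_m, log_v = [], []
        for m in block_sizes:
            k = N // m                      # number of complete blocks
            if k < 2:
                continue                    # need >= 2 blocks for a variance
            block_means = x[:k * m].reshape(k, m).mean(axis=1)
            log_m.append(np.log(m))
            log_v.append(np.log(block_means.var(ddof=1)))
        slope, _ = np.polyfit(log_m, log_v, 1)  # least squares in log-log scale
        return -slope

    # e.g. theta_hat = aggregated_variance_theta(x, block_sizes=2 ** np.arange(1, 10))

    Regressing across several aggregation levels, rather than relying on a single block length \(m\), is what mitigates the \(1/\log m\) bias of \(\hat {\theta}_m\) noted above.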
    aggregation
    long memory
    semiparametric model

    Identifiers