Étude de l'estimateur du maximum de vraisemblance dans le cas d'un processus autorégressif: convergence, normalité asymptotique, vitesse de convergence. (Asymptotic behaviour of maximum likelihood estimator in an autoregressive process: consistency, asymptotic distribution and expansion, rate of convergence) (Q751015)

From MaRDI portal
scientific article

    Statements

    Étude de l'estimateur du maximum de vraisemblance dans le cas d'un processus autorégressif: convergence, normalité asymptotique, vitesse de convergence. (Asymptotic behaviour of maximum likelihood estimator in an autoregressive process: consistency, asymptotic distribution and expansion, rate of convergence) (English)
    1989
The authors consider the group \(G\) of all invertible matrices \(g=\begin{pmatrix} A(g) & b(g)\\ 0 & 1 \end{pmatrix}\), where \(b(g)\in R^d\) is a column vector, a norm \(\|\cdot\|\) on \(R^d\), a probability \(\mu\) on \(G\) under which the corresponding operator norm satisfies \(\|A(g)\|<1\) \(\mu\)-a.e., a sequence \(g_n\), \(n\geq 1\), of independent \(\mu\)-distributed random variables, a fixed \(z\in R^d\), the distribution \(\nu\) of \(\sum_{k\geq 1}A(g_1)\cdots A(g_{k-1})b(g_k)\), a function \(\eta\colon G\to [1,\infty)\), constants \(c,\gamma,\tau>0\), \(\alpha\in (0,1]\), and a function \(F\colon G\times R^d\to R\) satisfying \[ |F(g,x)-F(g,y)| \leq \eta(g)\,\|x-y\|^{\gamma}(1+\|x\|^{\tau}+\|y\|^{\tau}) \] and \[ |F(g,x)| \leq \eta(g)(\|x\|^{\gamma+\tau}+1). \] They set \(\begin{pmatrix} Y_n\\ 1 \end{pmatrix} = g_n\cdots g_1\begin{pmatrix} z\\ 1 \end{pmatrix}\) for \(n\geq 0\), \(S_n=\sum^{n}_{k=1}F(g_k,Y_{k-1})\), \(e=\int\int F\,d\mu\,d\nu\), and impose \[ (*)\quad \int \eta(g)^5\,(1-\|A(g)\|^{\alpha})^{-5(\gamma+\tau)/\alpha}\exp(c\|b(g)\|^{\alpha})\,d\mu(g)<\infty. \]

First they prove that \(\sigma^2=\lim_{n} n^{-1}E((S_n-ne)^2)\) exists, does not depend on \(z\) and, under a supplementary condition on \(F\), is positive, and that \[ d(n^{-1/2}\sigma^{-1}(S_n-ne),N)\leq Cn^{-1/2}\bigl(1+\|z\|^{\gamma\beta}\exp(\lambda\|z\|^{\alpha})\bigr) \] with constants \(C,\lambda>0\), \(\beta>1\), where \(d\) is the supremum of the modulus of the difference of the distribution functions and \(N\) is standard normal. Replacing (*) by two other conditions, each involving an exponent \(m\geq 2\), and requiring \(\|A(g)\|\leq\rho<1\) \(\mu\)-a.e., they also prove that \[ \sup_{n} E\Bigl(\Bigl|n^{-1/2}\sum^{n}_{k=1}(F(g_k,Y_{k-1})-e)\Bigr|^m\Bigr)\leq C(1+\|z\|^{m(\gamma+\tau)}). \] In the proofs they use the operators \((U_tf)(x)=\int e^{itF(g,x)}f(g,x)\,d\mu(g)\) and construct a Banach space on which the \(U_t\) are quasicompact, etc.

Then the authors consider the particular case, corresponding to the title, of a nonrandom \(A(g_n)=A_{\theta}=\begin{pmatrix} \theta\\ I_{d-1}\;\,0 \end{pmatrix}\), the companion matrix whose first row is \(\theta=(\theta_1,\dots,\theta_d)\), with \(b(g_n)=\epsilon_n e_1\), \(e_1=(1,0,\dots,0)'\), \(E\epsilon_n=0\), and \(\max_i|u_i|<1\), where the \(u_i\) are the roots of \(u^d=\sum\theta_i u^{d-i}\). It follows that, if \(E(\log^+|\epsilon_k|)<\infty\), then \(\nu\) is the unique invariant distribution of \(P_{\theta}\), where \((P_{\theta}f)(x)=E(f(A_{\theta}x+\epsilon_n e_1))\). The previous results are applied, and for \(m\geq 4\), \[ P\Bigl(n^{-1/2}\sum^{n}_{k=1}(F(\epsilon_k,Y_{k-1})-e)>((m-3)\log n)^{1/2}\Bigr)=o(n^{-(m-3)/2}) \] is established. Furthermore, supposing that the \(\epsilon_n\) have a positive density \(f\), tending to 0 at \(\infty\), with a derivative satisfying \(\int |f'|^r f^{1-r}<\infty\) for some \(r\geq d+1\), and with a finite \(r\)-th moment, the authors prove, denoting by \(\hat\theta_n\) the maximum likelihood estimator of \(\theta\) in a \(P_{\theta}\)-Markov chain starting from \(z\), several statistical results. These are: the convergence in distribution of \(n^{1/2}(\hat\theta_n-\theta)\) to an \(N(0,\sigma^2)\) law, the convergence of the corresponding absolute \(k\)-th moments (for all \(k\)), \[ P(\|\hat\theta_n-\theta\|>\rho)\leq A_1\exp(-A_2 n)\quad\text{for every }\rho>0, \] and \[ P\bigl(n^{1/2}\|\hat\theta_n-\theta\|>B\log^{1/2}n\bigr)=O(n^{-1}). \]
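The autoregressive specialization lends itself to a short numerical illustration. The Python sketch below is not taken from the paper: it assumes, for simplicity, standard Gaussian innovations \(\epsilon_k\) (so that the conditional maximum likelihood estimator of \(\theta\) coincides with least squares), and the helper names companion, simulate_ar and mle_theta are purely illustrative. It builds the companion matrix \(A_\theta\), runs the chain \(Y_k=A_\theta Y_{k-1}+\epsilon_k e_1\) from \(z=0\), and prints \(n^{1/2}(\hat\theta_n-\theta)\).

    # Minimal sketch (not from the paper): AR(d) in companion form and its
    # conditional MLE, which equals least squares under Gaussian innovations.
    import numpy as np

    def companion(theta):
        """Companion matrix A_theta: first row theta, then the block (I_{d-1} 0)."""
        d = len(theta)
        A = np.zeros((d, d))
        A[0, :] = theta
        A[1:, :-1] = np.eye(d - 1)
        return A

    def simulate_ar(theta, n, rng, z=None):
        """Simulate Y_0,...,Y_n from Y_k = A_theta Y_{k-1} + eps_k e_1, eps_k ~ N(0,1)."""
        d = len(theta)
        A = companion(theta)
        Y = np.zeros((n + 1, d))
        if z is not None:
            Y[0] = z
        eps = rng.standard_normal(n)
        for k in range(1, n + 1):
            Y[k] = A @ Y[k - 1]
            Y[k, 0] += eps[k - 1]
        return Y

    def mle_theta(Y):
        """Conditional MLE (= least squares) of theta: regress X_k on Y_{k-1}."""
        X = Y[1:, 0]      # X_k, k = 1,...,n
        Z = Y[:-1, :]     # regressors Y_{k-1}
        theta_hat, *_ = np.linalg.lstsq(Z, X, rcond=None)
        return theta_hat

    rng = np.random.default_rng(0)
    theta = np.array([0.5, -0.3])   # roots of u^2 = 0.5u - 0.3 have modulus < 1
    n = 5000
    Y = simulate_ar(theta, n, rng)
    print("sqrt(n)*(theta_hat - theta) =", np.sqrt(n) * (mle_theta(Y) - theta))

The chosen \(\theta=(0.5,-0.3)\) satisfies the root condition above (both roots have modulus \(\sqrt{0.3}<1\)), so the chain admits the invariant distribution \(\nu\), and repeated runs of the last line fluctuate on a scale that is stable in \(n\), in line with the \(N(0,\sigma^2)\) limit.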
Supposing the existence of \(f^{(p+1)}\) and the validity of further conditions in which a constant \(k\) appears, and writing \[ n^{1/2}(\hat\theta_n-\theta)=h_1+\dots+n^{-(p-1)/2}h_p+n^{-p/2}\rho(n), \] the authors show that \[ P\bigl(\|\rho(n)\|>((k-2)\log n)^{(p+1)/2}\bigr)=O(n^{-(k-2)/2}) \] and also, for \(p=1\), that \[ d\bigl(n^{1/2}(\hat\theta_n-\theta),N(0,\sigma^2)\bigr)\leq Cn^{-1/2}\log^{1/2}n. \] The uniformity of these results in \(z\) (and in \(\theta\)) on compact sets is established. Finally, a more precise result is announced.
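The \(n^{-1/2}\log^{1/2}n\) bound for \(p=1\) can be probed numerically in the simplest case \(d=1\). The sketch below is ours, not the paper's: it assumes standard Gaussian innovations, for which \(\sigma^2=1-\theta^2\) is the asymptotic variance of the conditional maximum likelihood (= least squares) estimator of \(\theta\), and it compares a Monte Carlo estimate of the Kolmogorov distance \(d(n^{1/2}(\hat\theta_n-\theta),N(0,\sigma^2))\) with the reference rate \(n^{-1/2}\log^{1/2}n\).

    # Monte Carlo sketch (ours, not the paper's): Kolmogorov distance between
    # the law of n^{1/2}(theta_hat_n - theta) and N(0, 1 - theta^2) for a
    # Gaussian AR(1), printed next to the reference rate n^{-1/2} log^{1/2} n.
    import numpy as np
    from scipy.stats import kstest

    def ar1_mle(theta, n, rng):
        """Least-squares (= conditional MLE) estimate of theta from one AR(1) path."""
        x = np.zeros(n + 1)
        eps = rng.standard_normal(n)
        for k in range(1, n + 1):
            x[k] = theta * x[k - 1] + eps[k - 1]
        return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

    def kolmogorov_distance(theta=0.5, n=400, reps=1000, seed=0):
        """KS distance between sqrt(n)(theta_hat - theta) and N(0, 1 - theta^2)."""
        rng = np.random.default_rng(seed)
        stats = np.array([np.sqrt(n) * (ar1_mle(theta, n, rng) - theta)
                          for _ in range(reps)])
        sigma = np.sqrt(1.0 - theta ** 2)
        return kstest(stats, 'norm', args=(0.0, sigma)).statistic

    for n in (100, 400, 1600):
        print(n, round(kolmogorov_distance(n=n), 4),
              "reference rate:", round(np.sqrt(np.log(n) / n), 4))

Since only a finite number of replications is used, the estimated distance carries a sampling error of order reps\({}^{-1/2}\), so the comparison with the theoretical rate is indicative only.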
    maximum likelihood estimator
    autoregressive process
    speed of convergence
    limit law
    unique invariant distribution
    convergence in distribution
