Probabilistic diophantine approximation. I: Kronecker sequences (Q1841990)

From MaRDI portal
Language: English
Label: Probabilistic diophantine approximation. I: Kronecker sequences
Description: scientific article

    Statements

    Probabilistic diophantine approximation. I: Kronecker sequences (English)
    14 September 1995
    [The first printing of this article in Ann. Math., II. Ser. 140, No. 1, 109-160 (1994) has been withdrawn by the editors.] Let \(\alpha\) be a real number and denote by \[ \Delta (\alpha, N)= \max_{1\leq n\leq N} \sup_{0\leq x\leq 1} \biggl| \sum_{\substack{ 1\leq i\leq n\\ 0\leq i\alpha \bmod 1< x}} 1- nx \biggr| \] the discrepancy function of the sequence \((n\alpha)\) modulo 1. The investigation of this function is of central interest in diophantine approximation and in the theory of uniform distribution modulo 1. Khintchine (1923) proved that for almost all \(\alpha\) the discrepancy function \(\Delta (\alpha, N)\) is \(O\bigl(\log N(\log \log N)^{1+\varepsilon}\bigr)\) for arbitrary \(\varepsilon>0\). \textit{W. Schmidt} [Trans. Am. Math. Soc. 110, 493-518 (1964; Zbl 0199.094)] extended this result of Khintchine to higher dimensions: for the discrepancy function \(\Delta (\alpha, N)\) of the point sequence \((\alpha n)\) in \(\mathbb{R}^k \bmod 1\), \(\alpha= (\alpha_1, \dots, \alpha_k)\), he proved the metrical result \[ \Delta (\alpha, N)= O\bigl( (\log N)^{k+1+\varepsilon} \bigr) \quad \text{for almost all} \quad \alpha\in \mathbb{R}^k. \] This is the best bound obtainable via the Erdős-Turán-Koksma inequality.

    In the present paper the author uses a completely different method to obtain the following metrical criterion: for almost all \(\alpha\in \mathbb{R}^k\), \[ \Delta (\alpha, N)= O\bigl( (\log N)^k \varphi (\log\log N)\bigr) \iff \sum_{n=1}^\infty \frac{1}{\varphi (n)} <\infty, \] where \(\varphi\) is an arbitrary positive increasing function. Setting \(\varphi (x)= x(\log x)^{1+\varepsilon}\) yields an improvement of Schmidt's bound by the ``critical'' factor \(\log N\) that comes from the Erdős-Turán-Koksma inequality.

    The proof of this very important result, which reaches the conjectured best possible order of the discrepancy function (apart from log log factors), is based on a Fourier analysis approach. The Poisson summation formula yields an explicit representation of the ``local'' discrepancy function. From this representation the theorem is proved by observing that the ``main'' terms cancel out and by estimating the ``small'' exponential sums via ``geometric'' arguments involving an important ``Key Lemma''. In the announced second part of the paper the author will prove central limit theorems and laws of the iterated logarithm.
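    For the particular choice \(\varphi(x)= x(\log x)^{1+\varepsilon}\) mentioned above, the convergence side of the criterion is a one-line check (not spelled out in the review): by the integral test, \[ \sum_{n=2}^\infty \frac{1}{n(\log n)^{1+\varepsilon}} \le \frac{1}{2(\log 2)^{1+\varepsilon}} + \int_2^\infty \frac{dx}{x(\log x)^{1+\varepsilon}} = \frac{1}{2(\log 2)^{1+\varepsilon}} + \frac{1}{\varepsilon(\log 2)^{\varepsilon}} < \infty. \]

    To make the definition of \(\Delta(\alpha, N)\) concrete, the following is a minimal numerical sketch (not taken from the paper under review): it evaluates the one-dimensional discrepancy function via the standard closed-form expression \(n\cdot D_n^*\) for the unnormalised star discrepancy of the first \(n\) sorted points. The function name kronecker_discrepancy and the golden-ratio test value are illustrative choices.

```python
# Minimal sketch: Delta(alpha, N) = max_{1<=n<=N} sup_{0<=x<=1} |#{i<=n : {i*alpha} < x} - n*x|
# for the one-dimensional Kronecker sequence ({i*alpha})_{i>=1}.
from bisect import insort
from math import sqrt

def kronecker_discrepancy(alpha: float, N: int) -> float:
    """Return the discrepancy function Delta(alpha, N) of (n*alpha) mod 1."""
    points = []      # sorted fractional parts {i*alpha}, i = 1..n
    delta = 0.0
    for n in range(1, N + 1):
        insort(points, (n * alpha) % 1.0)    # insert the n-th point, keep the list sorted
        # For sorted y_1 <= ... <= y_n, sup_x |#{i : y_i < x} - n*x|
        # equals max_i max( i - n*y_i , n*y_i - (i - 1) ).
        local = max(max(i + 1 - n * y, n * y - i) for i, y in enumerate(points))
        delta = max(delta, local)
    return delta

if __name__ == "__main__":
    alpha = (sqrt(5) - 1) / 2    # golden-ratio conjugate, a badly approximable number
    for N in (10, 100, 1000):
        print(N, kronecker_discrepancy(alpha, N))
```

    For a badly approximable \(\alpha\) such as the golden-ratio conjugate the printed values grow only like \(\log N\); the metrical results discussed above describe the typical (almost-all-\(\alpha\)) behaviour instead.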
    discrepancy function
    uniform distribution
    improvement of Schmidt's bound
    best possible order
    Fourier analysis
    Poisson summation formula

    Identifiers