On the mean square of the divisor function in short intervals (Q1032640)

Language: English
Label: On the mean square of the divisor function in short intervals
Description: scientific article

    Statements

    On the mean square of the divisor function in short intervals (English)
    26 October 2009
    The author gives non-trivial estimates for \(J_k\), the Selberg integral for the \(k\)-divisor function: \[ J_k(X,h):=\int_X^{2X}(\Delta_k(x+h) - \Delta_k(x))^2 \,dx \qquad(h = h(X)\gg1, \; h = o(X) \;\text{as} \;X\to\infty), \] where \(h\) lies in a suitable range. Here, for a fixed integer \(k\geq2\), \(\Delta_k(x)\) is the error term in the asymptotic formula for the summatory function of the \(k\)-divisor function \(d_k(n)\), generated by \(\zeta^k\) (where \(\zeta\) is the Riemann zeta-function). Bounds for this integral yield information on \(d_k\) in ``almost all'' short intervals \([x,x+h]\) (here ``short'' refers to the hypothesis \(h=o(X)\), while \(h\gg 1\) avoids trivialities). Theorem 1 provides non-trivial bounds for \(J_k\), namely \(J_k(X,h)\ll_{\varepsilon} X^{1-\varepsilon}h^2\), with \(\varepsilon>0\) depending only on the \textit{width} of the short interval \([x,x+h]\), that is, on \(\theta:=(\log h)/(\log X)\). The author exploits information on the Riemann \(\zeta\)-function in the critical strip in the form of the Carleson abscissa \(\sigma(k)\in[1/2,1[\), defined as the least \(\sigma>0\) such that \[ \forall \varepsilon>0 \quad \int_0^T \left|\zeta\left(\sigma+it\right)\right|^{2k} \,dt \ll_{\varepsilon} T^{1+\varepsilon}. \] The exact values of \(\sigma(k)\) are not known, but the best known such abscissæ are reported in Ivić's book on the Riemann zeta-function [New York: John Wiley (1985; Zbl 0556.10026); 2nd ed., Mineola, N.Y.: Dover (2003; Zbl 1034.11046)] and are applied here. The non-trivial estimate quoted above for \(J_k\) is obtained, for every \(k\geq 3\), for all widths \(\theta=\theta(k)\) such that \(2\sigma(k)-1<\theta(k)<1\). For \(k=2\) a more precise estimate had already been obtained by the author in [Ramanujan J. 19, No. 2, 207--224 (2009; Zbl 1226.11086)], improving (through Voronoï summation) elementary results of the reviewer and \textit{S. Salerno} [Acta Arith. 113, No. 2, 189--201 (2004; Zbl 1122.11062)]. Also, \[ \Sigma_k(X,h):=\sum_{X\leq x\leq 2X}(\Delta_k(x+h) - \Delta_k(x))^2, \] the discrete version of \(J_k\), is linked to \(J_k\) in Theorem 2 (namely, the difference \(\Sigma_k-J_k\) is estimated): for \(k=2\) via the asymptotic formula from the Ramanujan J. paper quoted above, and for \(k>2\) via the trivial bound for \(\Delta_k\) coming from \(d_k(n)\ll_{\varepsilon} n^{\varepsilon}\) (for every \(\varepsilon>0\)). Apart from this rather technical result, the author gives two corollaries to Theorem 1: on the consequences under the Lindelöf Hypothesis (Corollary 1) and on those following from the known values of the Carleson abscissæ (Corollary 2). The methods of proof are standard analytic number theory techniques, but the proof of Theorem 1 requires some ingenuity in order to apply the known information about the \(\zeta\)-function in an optimal way.
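    For orientation (a standard reformulation of the above definitions, not quoted from the paper under review): for fixed \(k\geq2\) one writes \[ \Delta_k(x):=\sum_{n\leq x}d_k(n)-\operatorname*{Res}_{s=1}\frac{\zeta^k(s)\,x^s}{s}=\sum_{n\leq x}d_k(n)-xP_{k-1}(\log x), \] where \(P_{k-1}\) is a suitable polynomial of degree \(k-1\) (notation introduced here only for illustration). Since \(d_k(n)\ll_{\varepsilon}n^{\varepsilon}\), one has \(\Delta_k(x+h)-\Delta_k(x)\ll_{\varepsilon}hX^{\varepsilon}\) for \(X\leq x\leq 2X\), whence the trivial benchmark \[ J_k(X,h)\ll_{\varepsilon}X^{1+\varepsilon}h^2. \] The estimate \(J_k(X,h)\ll_{\varepsilon}X^{1-\varepsilon}h^2\) of Theorem 1 thus saves a power of \(X\) over this benchmark, which is the sense in which it is non-trivial.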
    mean-square
    divisor functions
    short intervals
