Sharp bounds on \(p\)-norms for sums of independent uniform random variables, \(0 < p < 1\) (Q6168073)

scientific article; zbMATH DE number 7709558

    publication date: 10 July 2023

\textit{A. Khintchine} [Math. Z. 18, 109--116 (1923; JFM 49.0132.01)] studied moment comparison inequalities for Rademacher random signs in his proof of the law of the iterated logarithm. Obtaining sharp constants in such Khinchin-type inequalities is both interesting and challenging; the paper under review obtains the sharp constants in \(L_p\)-\(L_2\) Khinchin inequalities for sums of independent uniform random variables with \(0<p<1\), together with a \(p\)-Rényi entropy analogue.

Let \(U_1, U_2, \dots\) be independent random variables uniform on \([-1, 1]\), and let \(\|X\|_p = (E|X|^p)^{1/p}\) denote the \(p\)-norm of a random variable \(X\). For \(p>-1\), let \(c_p\) and \(C_p\) be the best constants such that for all real numbers \(a_1, \dots, a_n\) and \(X=\sum_{j=1}^n a_j U_j\),
\[
c_p \Big(\sum_{j=1}^n a_j^2\Big)^{1/2} \le \|X\|_p \le C_p \Big(\sum_{j=1}^n a_j^2\Big)^{1/2}.
\]
Since \(\|X\|_2 = \frac{1}{\sqrt{3}}\big(\sum_{j=1}^n a_j^2\big)^{1/2}\), equivalently
\[
c_p = \inf_{\|a\|_2=1}\frac{\|X\|_p}{\sqrt{3}\,\|X\|_2}, \qquad C_p = \sup_{\|a\|_2=1}\frac{\|X\|_p}{\sqrt{3}\,\|X\|_2},
\]
where the infimum and supremum run over unit vectors \(a=(a_1,\dots,a_n)\in\mathbb{R}^n\). \textit{R. Latała} and \textit{K. Oleszkiewicz} [Colloq. Math. 68, No. 2, 197--206 (1995; Zbl 0821.60027)] found the optimal constants \(c_p\) and \(C_p\) for \(p>1\); \textit{U. Haagerup} [Stud. Math. 70, 231--283 (1982; Zbl 0501.46015)] discovered the complicated behaviour in the case of random signs and \(-1<p<0\); \textit{G. Chasapis} et al. [J. Funct. Anal. 281, No. 9, Article ID 109185, 23 p. (2021; Zbl 1482.52012)] found \(c_p\) for \(p<0\) using Ball's cube slicing inequality; and \textit{A. Eskenazis} et al. [Ann. Probab. 46, No. 5, 2908--2945 (2018; Zbl 1428.60036); Adv. Math. 334, 389--416 (2018; Zbl 1435.60019)] showed that \(C_p =\|U_1\|_p\) for \(-1<p<1\), using unimodality and Jensen's inequality. The remaining case, the optimal constant \(c_p\) for \(0<p<1\), is the main result of this paper: Theorem 1 states that \(c_p =\|Z\|_p/\sqrt{3}\), where \(Z\) is a standard Gaussian \(N(0,1)\) random variable. In analogy with Theorem 1, the authors obtain the following lower and upper bounds for the Rényi entropy (Theorem 2):
\[
h_p(U_1) \le h_p(X) \le h_p(Z/\sqrt{3}),
\]
where \(h_p(X) = \frac{1}{1-p}\log \int_{\mathbb{R}} f_X^p\) with \(f_X\) the density of \(X\), for \(X = \sum_{j=1}^n a_j U_j\), \(0<p<1\), and any unit vector \(a=(a_1, \dots, a_n)\).

Section 2 gives an overview of the proof of Theorem 1, which breaks into two main steps (an integral inequality and an inductive argument); the technical lemmas completing the proof are collected in Sections 3--5, and Theorem 2 is proved in Section 6. The outline for Theorem 1 follows the approach of Haagerup [loc. cit.], with major simplifications due to \textit{F. L. Nazarov} and \textit{A. N. Podkorytov} [in: Complex analysis, operators, and related topics. The S. A. Vinogradov memorial volume. Basel: Birkhäuser. 247--267 (2000; Zbl 0969.46014)]. By Lemma 4 in Section 2,
\[
E|X|^p = k_p \int_0^{\infty} \frac{1- \mathrm{Re}\,\phi_X(t)}{t^{p+1}}\,dt
\]
for a random variable \(X\) with characteristic function \(\phi_X(t) = E e^{itX}\), where \(k_p>0\) is a constant depending only on \(p\). By the AM-GM inequality, \(E|X|^p \ge \sum_{j=1}^n a_j^2 J_p(1/a_j^2)\), where
\[
J_p(s) = k_p \int_0^{\infty} \frac{1- \big|\frac{\sin(t/\sqrt{s})}{t/\sqrt{s}}\big|^s}{t^{p+1}}\,dt, \qquad s\ge 1
\]
(note that \(1/a_j^2 \ge 1\) for a unit vector \(a\)). If \(J_p(s) \ge J_p(\infty)\) holds for all \(s\ge s_0=1\) and all \(0<p<1\), the proof of Theorem 1 is complete. Section 4 proves that this holds for \(0.6<p<1\) with \(s_0=1\) (Theorem 5) and for \(0<p<1\) with \(s_0=2\) (Theorem 6); the remaining case \(0<p<0.69\) is handled in Section 5.
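To illustrate Theorem 1 numerically: the standard closed forms \(E|Z|^p = 2^{p/2}\Gamma(\frac{p+1}{2})/\sqrt{\pi}\) and \(E|U_1|^p = \frac{1}{p+1}\) give both sharp constants explicitly. The following Monte Carlo sketch (the reviewer's own, not from the paper; the values \(p=0.5\) and \(a=(3,4,12)/13\) are arbitrary choices) checks that \(\|X\|_p\) falls between \(c_p\) and \(C_p\) for a unit vector \(a\):
\begin{verbatim}
# Sanity check of the sharp constants in Theorem 1 (numerical sketch only).
import numpy as np
from math import gamma, pi, sqrt

def norm_p_gaussian(p):
    # ||Z||_p for Z ~ N(0,1): E|Z|^p = 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)
    return (2**(p / 2) * gamma((p + 1) / 2) / sqrt(pi))**(1 / p)

def norm_p_uniform(p):
    # ||U||_p for U uniform on [-1,1]: E|U|^p = 1/(p+1)
    return (1 / (p + 1))**(1 / p)

p = 0.5
c_p = norm_p_gaussian(p) / sqrt(3)   # Theorem 1: c_p = ||Z||_p / sqrt(3)
C_p = norm_p_uniform(p)              # Eskenazis et al.: C_p = ||U_1||_p

rng = np.random.default_rng(0)
a = np.array([3.0, 4.0, 12.0]) / 13.0             # unit vector in R^3
X = rng.uniform(-1.0, 1.0, (10**6, a.size)) @ a   # samples of sum_j a_j U_j
norm_p_X = (np.abs(X)**p).mean()**(1 / p)

print(f"{c_p:.4f} <= {norm_p_X:.4f} <= {C_p:.4f}")  # 0.3903 <= ... <= 0.4444
\end{verbatim}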
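The reduction to \(J_p\) can also be probed numerically. Since \(k_p>0\) for \(0<p<1\), the claim \(J_p(s)\ge J_p(\infty)\) is equivalent to nonnegativity of \(\int_0^\infty \big(e^{-t^2/6} - |\sin(t/\sqrt{s})/(t/\sqrt{s})|^s\big)\,t^{-p-1}\,dt\), using that \(|\sin(t/\sqrt{s})/(t/\sqrt{s})|^s \to e^{-t^2/6}\) as \(s\to\infty\) (the characteristic function of \(Z/\sqrt{3}\)). A rough quadrature sketch over Theorem 6's range \(s\ge 2\) (again the reviewer's own; the sampled values of \(p\) and \(s\) are arbitrary):
\begin{verbatim}
# Probe J_p(s) >= J_p(inf) by integrating the difference of the integrands;
# the positive constant k_p cancels. Numerical sketch, not the paper's code.
import numpy as np
from scipy.integrate import quad

def sinc_pow(t, s):
    u = t / np.sqrt(s)
    return np.abs(np.sinc(u / np.pi))**s   # np.sinc(x) = sin(pi*x)/(pi*x)

def jp_gap(p, s):
    # (J_p(s) - J_p(inf)) / k_p
    f = lambda t: (np.exp(-t**2 / 6) - sinc_pow(t, s)) / t**(p + 1)
    return quad(f, 0, np.inf, limit=500)[0]

for p in (0.2, 0.5, 0.8):
    print(p, [round(jp_gap(p, s), 6) for s in (2, 3, 10, 100)])  # all >= 0
\end{verbatim}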
Section 3 begins with lemmas on the sine function and on sums of \(p\)-th powers, and then derives estimates for the gamma function needed in the inductive part. Using modified distribution functions and a technical result of Nazarov and Podkorytov [loc. cit.], Lemma 18 is proved in Section 4 to complete the proofs of Theorems 5 and 6, under different conditions and in several cases. Section 5 presents the inductive argument covering the remaining case \(0<p<0.69\) of Theorem 1. For Theorem 2, the lower bound is shown directly via the entropy power inequality, and the upper bound follows from Hölder's inequality together with the main result of Latała and Oleszkiewicz [loc. cit.]. The most important and interesting case is that of the Shannon entropy, \(p=1\), where the natural conjecture is that \(h_1(X) \le h_1\big(\sum_{j=1}^{n}U_j/\sqrt{n}\big)\) for every unit vector \(a\in \mathbb{R}^n\).
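As a numerical illustration of Theorem 2 (again the reviewer's own sketch, not from the paper), take \(n=2\) and \(a=(1,1)/\sqrt{2}\), so that \(X=(U_1+U_2)/\sqrt{2}\) has the triangular density \(f_X(x)=(\sqrt{2}-|x|)/2\) on \([-\sqrt{2},\sqrt{2}]\); the three Rényi entropies can then be compared by direct quadrature:
\begin{verbatim}
# Rényi entropies h_p, 0<p<1, for U_1 (uniform on [-1,1]),
# X = (U_1+U_2)/sqrt(2) (triangular), and Z/sqrt(3) ~ N(0,1/3). Sketch only.
import numpy as np

x = np.linspace(-4.0, 4.0, 400001)
dx = x[1] - x[0]

def renyi(f, p):
    # h_p = log(integral of f^p) / (1-p), via a Riemann sum on the grid
    return np.log((f**p).sum() * dx) / (1 - p)

s2 = np.sqrt(2.0)
f_U = np.where(np.abs(x) <= 1, 0.5, 0.0)
f_X = np.where(np.abs(x) <= s2, (s2 - np.abs(x)) / 2, 0.0)
f_Z = np.sqrt(3 / (2 * np.pi)) * np.exp(-1.5 * x**2)

p = 0.5
print(renyi(f_U, p), renyi(f_X, p), renyi(f_Z, p))
# approx 0.6931 <= 0.9220 <= 1.0628: h_p(U_1) <= h_p(X) <= h_p(Z/sqrt(3))
\end{verbatim}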
    Khinchin-type inequality
    sum of independent random variables
    optimal \(p\)-bounds
    Rényi entropy
    AM-GM inequality
    characteristic function
    gamma function