Shannon's inequality for the Rényi entropy and an application to the uncertainty principle (Q2149498)
From MaRDI portal
scientific article
Language | Label | Description | Also known as |
---|---|---|---|
English | Shannon's inequality for the Rényi entropy and an application to the uncertainty principle | scientific article | |
Statements
Shannon's inequality for the Rényi entropy and an application to the uncertainty principle (English)
29 June 2022
For a nonnegative function \(f\in \mathbb{L}^{1}(\mathbb{R}^{n})\) with \(\left\| f\right\|_{1}=1\), the quantity defined by \[ h[f]= - \int_{\mathbb{R}^{n}}f(x)\log (f(x))dx \] is called the Shannon entropy. \textit{C. E. Shannon} himself proved [Bell Syst. Tech. J. 27, 379--423, 623--656 (1948; Zbl 1154.94303)] that the Shannon entropy is bounded by the second moment of the function. More precisely, if \(f\in \mathbb{L}^{2}(\mathbb{R}^{n})\) with \(\left\| f\right\|_{1}=1\), then we have \[ -\int_{\mathbb{R}^{n}}f(x)\log(f(x))dx\leq \frac{n}{2}\log \left(\frac{2\pi e}{n}\int_{\mathbb{R}^{n}}|x|^{2}f(x)dx\right), \] where the constant \(\dfrac{2\pi e}{n}\) is optimal. For \(0 <\alpha< +\infty\) with \(\alpha \neq 1\), the Rényi entropy \(h_{\alpha}\) is defined by \[ h_{\alpha}[f]= \frac{1}{1-\alpha}\log \left(\int_{\mathbb{R}^{n}} f(x)^{\alpha}dx\right) \] for functions \(f\in \mathbb{L}^{1}(\mathbb{R}^{n})\) with \(\left\| f\right\|_{1}=1\). The main purpose of this paper is to prove the analogue of the above inequality for Rényi entropies. More precisely, the author proves that if \(\alpha>0\) with \(\alpha\neq 1\) and \[ b> \begin{cases} 0, & \text{ if }\alpha>1\\ n\left(\frac{1}{\alpha}-1\right), & \text{ if }0< \alpha <1, \end{cases} \] then there exists a constant \(C_{b}\) such that, for every nonnegative function \(f\in \mathbb{L}^{1}_{b}(\mathbb{R}^{n})\) with \(\left\| f\right\|_{1}=1\), \[ \frac{1}{1-\alpha}\log \left(\int_{\mathbb{R}^{n}} f(x)^{\alpha}dx\right) \leq \frac{n}{b}\log \left(C_{b}\int_{\mathbb{R}^{n}}|x|^{b}f(x)dx\right). \] Moreover, the constant \(C_{b}\) is given explicitly and shown to be optimal.
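To illustrate the sharpness of the constant \(\frac{2\pi e}{n}\) in Shannon's inequality, one may check the standard Gaussian case (a routine computation, not taken from the paper under review; the density \(g\) below is introduced only for this example). For \(g(x)=(2\pi)^{-n/2}e^{-|x|^{2}/2}\) one has \(-\log g(x)=\frac{n}{2}\log(2\pi)+\frac{|x|^{2}}{2}\), hence \[ h[g]=\frac{n}{2}\log(2\pi)+\frac{1}{2}\int_{\mathbb{R}^{n}}|x|^{2}g(x)dx=\frac{n}{2}\log(2\pi)+\frac{n}{2}=\frac{n}{2}\log(2\pi e), \] while, since \(\int_{\mathbb{R}^{n}}|x|^{2}g(x)dx=n\), the right-hand side of Shannon's inequality equals \(\frac{n}{2}\log\left(\frac{2\pi e}{n}\cdot n\right)=\frac{n}{2}\log(2\pi e)\). Thus the Gaussian attains equality, which shows that the constant cannot be decreased.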
Shannon entropy
Shannon inequality
Rényi entropy
uncertainty principle