Dimension-free information concentration via exp-concavity
Publication: 4617618
zbMATH Open: 1406.60033 · arXiv: 1802.09301 · MaRDI QID: Q4617618 · FDO: Q4617618
Authors: Ya-Ping Hsieh, Volkan Cevher
Publication date: 6 February 2019
Abstract: Information concentration of probability measures has important implications in learning theory. Recently, it was discovered that the information content of a log-concave distribution concentrates around its differential entropy, albeit with an unpleasant dependence on the ambient dimension. In this work, we prove that if the potential of the log-concave distribution is exp-concave, which is a central notion for fast rates in online and statistical learning, then the concentration of information can be further improved to depend only on the exp-concavity parameter, and hence can be dimension-independent. Central to our proof is a novel yet simple application of the variance Brascamp-Lieb inequality. In the context of learning theory, our concentration-of-information result immediately yields high-probability versions of many previous bounds that hold only in expectation.
Full work available at URL: https://arxiv.org/abs/1802.09301
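The dimension dependence described in the abstract can be illustrated numerically. A minimal sketch (not from the paper, which treats general exp-concave potentials): for a standard Gaussian in R^d — a log-concave distribution — the information content -log f(X) has mean equal to the differential entropy (d/2)·log(2πe), but its standard deviation sqrt(d/2) grows with the ambient dimension d. This is the kind of dimension dependence the paper's exp-concavity assumption removes.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_content_samples(d, n=20000):
    """Draw n samples from N(0, I_d) and return the information
    content -log f(X) for each sample."""
    x = rng.standard_normal((n, d))
    # For the standard Gaussian, -log f(x) = (d/2) log(2*pi) + ||x||^2 / 2
    return 0.5 * d * np.log(2 * np.pi) + 0.5 * np.sum(x**2, axis=1)

for d in (1, 10, 100):
    h = 0.5 * d * np.log(2 * np.pi * np.e)  # differential entropy of N(0, I_d)
    info = info_content_samples(d)
    # Empirical mean matches the entropy; the spread grows like sqrt(d/2).
    print(f"d={d:4d}  entropy={h:8.2f}  mean={info.mean():8.2f}  "
          f"std={info.std():6.2f}  (theory: sqrt(d/2)={np.sqrt(d / 2):.2f})")
```

Running the sketch shows the empirical mean tracking the differential entropy in every dimension while the fluctuations widen as d grows — the behavior that a dimension-free concentration bound rules out for exp-concave potentials.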
Recommendations
- Concentration of the information in data with log-concave distributions
- Concentration of information content for convex measures
- Learning Theory
- Optimal concentration of information content for log-concave densities
- An optimal uniform concentration inequality for discrete entropies on finite alphabets in the high-dimensional setting
Keywords: log-concave measures; differential entropy; dimension-free concentration; exp-concavity; variance Brascamp-Lieb inequality
Cited in 4 documents