From \(\varepsilon\)-entropy to KL-entropy: analysis of minimum information complexity density estimation
Abstract: We consider an extension of \(\varepsilon\)-entropy to a KL-divergence based complexity measure for randomized density estimation methods. Based on this extension, we develop a general information-theoretical inequality that measures the statistical complexity of some deterministic and randomized density estimators. Consequences of the new inequality will be presented. In particular, we show that this technique can lead to improvements of some classical results concerning the convergence of minimum description length and Bayesian posterior distributions. Moreover, we are able to derive clean finite-sample convergence bounds that are not obtainable using previous approaches.
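For background, a sketch of the standard definitions behind the abstract's terminology (these are the usual textbook notions, given here for orientation only; the symbols \(p\), \(q\), \(\mu\), \(F\) and \(d\) are illustrative and not taken from the paper): the Kullback-Leibler divergence between densities \(p\) and \(q\) with respect to a dominating measure \(\mu\) is
\[
D_{\mathrm{KL}}(p \,\|\, q) = \int p(x)\,\log\frac{p(x)}{q(x)}\,d\mu(x),
\]
and Kolmogorov's \(\varepsilon\)-entropy of a class \(F\) under a metric \(d\) is
\[
H(\varepsilon, F, d) = \log N(\varepsilon, F, d),
\]
the logarithm of the smallest number of \(d\)-balls of radius \(\varepsilon\) needed to cover \(F\). The "KL-entropy" of the title replaces this metric covering with a KL-divergence based complexity measure suited to randomized estimators.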
Recommendations
- The complexity of estimating min-entropy
- \(k_n\)-nearest neighbor estimators of entropy
- Entropic measures, Markov information sources and complexity
- On information gain, Kullback-Leibler divergence, entropy production and the involution kernel
- Some further results on the minimum error entropy estimation
- Comparison of maximum entropy and minimal mutual information in a nonlinear setting
- The complexity of estimating Rényi entropy
- Unifying computational entropies via Kullback-Leibler divergence
- Sublinear estimation of entropy and information distances
Cites work
- scientific article; zbMATH DE number 3173999
- scientific article; zbMATH DE number 45100
- scientific article; zbMATH DE number 1420699
- 10.1162/1532443041424300
- Asymptotic methods in statistical decision theory
- Convergence of estimates under dimensionality restrictions
- Convergence rates of posterior distributions
- Information-theoretic determination of minimax rates of convergence
- Minimum complexity density estimation
- On Bayesian consistency
- PAC-Bayesian Generalisation Error Bounds for Gaussian Process Classification
- Rates of convergence of posterior distributions
- The consistency of posterior distributions in nonparametric problems
- Weak convergence and empirical processes. With applications to statistics
Cited in (42)
- Gibbs posterior inference on multivariate quantiles
- Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
- Adaptive Bayesian density estimation with location-scale mixtures
- PAC-Bayesian bounds for sparse regression estimation with exponential weights
- Learning Theory
- Mirror averaging with sparsity priors
- Learning by mirror averaging
- New estimates for Csiszár divergence and Zipf-Mandelbrot entropy via Jensen-Mercer's inequality
- Sparse recovery in convex hulls via entropy penalization
- Model-free posterior inference on the area under the receiver operating characteristic curve
- Consistency and generalization bounds for maximum entropy density estimation
- Dynamics of Bayesian updating with dependent data and misspecified models
- Heavy-tailed Bayesian nonparametric adaptation
- Quasi-Bayesian analysis of nonparametric instrumental variables models
- Chaining meets chain rule: multilevel entropic regularization and training of neural networks
- Posterior concentration and fast convergence rates for generalized Bayesian learning
- On general Bayesian inference using loss functions
- Gibbs posterior convergence and the thermodynamic formalism
- Bayesian fractional posteriors
- Approximate models and robust decisions
- scientific article; zbMATH DE number 7625184
- Generalized mirror averaging and \(D\)-convex aggregation
- Kolmogorov's \(\varepsilon\)-entropy and the problem of statistical estimation
- Model misspecification, Bayesian versus credibility estimation, and Gibbs posteriors
- Approximating Bayes in the 21st century
- Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector
- Joint production in stochastic non-parametric envelopment of data with firm-specific directions
- Fast rates for general unbounded loss functions: from ERM to generalized Bayes
- Adaptive variable selection for sequential prediction in multivariate dynamic models
- Minimum description length revisited
- Fast adaptive estimation of log-additive exponential models in Kullback-Leibler divergence
- Generalized Bayes Quantification Learning under Dataset Shift
- A comparison of learning rate selection methods in generalized Bayesian inference
- Contextuality of misspecification and data-dependent losses
- A Justification for Applying the Principle of Minimum Relative Entropy to Information Integration Problems
- Linear and convex aggregation of density estimators
- Gibbs posterior inference on value-at-risk
- Predicting Panel Data Binary Choice with the Gibbs Posterior
- Gibbs posterior for variable selection in high-dimensional classification and data mining
- Bayesian variable selection for high dimensional generalized linear models: convergence rates of the fitted densities
- Adaptive variational Bayes: optimality, computation and applications
- Optimal rates of entropy estimation over Lipschitz balls