From \(\varepsilon\)-entropy to KL-entropy: analysis of minimum information complexity density estimation
DOI: 10.1214/009053606000000704 · zbMATH Open: 1106.62005 · arXiv: math/0702653 · OpenAlex: W2086333522 · MaRDI QID: Q869967
Authors: Tong Zhang
Publication date: 12 March 2007
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/math/0702653
Recommendations
- The complexity of estimating min-entropy
- \(k_n\)-nearest neighbor estimators of entropy
- Entropic measures, Markov information sources and complexity
- On information gain, Kullback-Leibler divergence, entropy production and the involution kernel
- Some further results on the minimum error entropy estimation
- Comparison of maximum entropy and minimal mutual information in a nonlinear setting
- The complexity of estimating Rényi entropy
- Unifying computational entropies via Kullback-Leibler divergence
- Sublinear estimation of entropy and information distances
MSC Classification
- Statistical aspects of information-theoretic topics (62B10)
- Bayesian inference (62F15)
- Density estimation (62G07)
- Bayesian problems; characterization of Bayes procedures (62C10)
Cites Work
- Weak convergence and empirical processes. With applications to statistics
- Asymptotic methods in statistical decision theory
- Convergence rates of posterior distributions
- Convergence of estimates under dimensionality restrictions
- Title not available
- Minimum complexity density estimation
- Title not available
- Rates of convergence of posterior distributions
- The consistency of posterior distributions in nonparametric problems
- On Bayesian consistency
- DOI: 10.1162/1532443041424300
- Information-theoretic determination of minimax rates of convergence
- Title not available
- PAC-Bayesian Generalisation Error Bounds for Gaussian Process Classification
Cited In (42)
- Bayesian fractional posteriors
- Optimal rates of entropy estimation over Lipschitz balls
- New estimates for Csiszár divergence and Zipf-Mandelbrot entropy via Jensen-Mercer's inequality
- Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector
- Consistency and generalization bounds for maximum entropy density estimation
- Fast adaptive estimation of log-additive exponential models in Kullback-Leibler divergence
- Gibbs posterior for variable selection in high-dimensional classification and data mining
- Approximate models and robust decisions
- A Justification for Applying the Principle of Minimum Relative Entropy to Information Integration Problems
- Gibbs posterior inference on value-at-risk
- Kolmogorov's \(\varepsilon\)-entropy and the problem of statistical estimation
- Bayesian variable selection for high dimensional generalized linear models: convergence rates of the fitted densities
- Learning Theory
- Mirror averaging with sparsity priors
- Generalized Bayes Quantification Learning under Dataset Shift
- Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
- Adaptive Bayesian density estimation with location-scale mixtures
- PAC-Bayesian bounds for sparse regression estimation with exponential weights
- Title not available
- Generalized mirror averaging and \(D\)-convex aggregation
- Sparse recovery in convex hulls via entropy penalization
- Heavy-tailed Bayesian nonparametric adaptation
- Dynamics of Bayesian updating with dependent data and misspecified models
- Adaptive variational Bayes: optimality, computation and applications
- Chaining meets chain rule: multilevel entropic regularization and training of neural networks
- Joint production in stochastic non-parametric envelopment of data with firm-specific directions
- Quasi-Bayesian analysis of nonparametric instrumental variables models
- Adaptive variable selection for sequential prediction in multivariate dynamic models
- Minimum description length revisited
- Contextuality of misspecification and data-dependent losses
- A comparison of learning rate selection methods in generalized Bayesian inference
- Fast rates for general unbounded loss functions: from ERM to generalized Bayes
- Gibbs posterior inference on multivariate quantiles
- Model-free posterior inference on the area under the receiver operating characteristic curve
- Posterior concentration and fast convergence rates for generalized Bayesian learning
- On general Bayesian inference using loss functions
- Predicting Panel Data Binary Choice with the Gibbs Posterior
- Learning by mirror averaging
- Approximating Bayes in the 21st century
- Gibbs posterior convergence and the thermodynamic formalism
- Model misspecification, Bayesian versus credibility estimation, and Gibbs posteriors
- Linear and convex aggregation of density estimators