Information-theoretic upper and lower bounds for statistical estimation
From MaRDI portal
Publication: 3547876
DOI: 10.1109/TIT.2005.864439 · zbMath: 1320.94033 · OpenAlex: W2130797782 · MaRDI QID: Q3547876
No author found.
Publication date: 21 December 2008
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://doi.org/10.1109/tit.2005.864439
Related Items:
Learning with Limited Samples: Meta-Learning and Applications to Communication Systems
Fast learning rates in statistical inference through aggregation
GENERAL INEQUALITIES FOR GIBBS POSTERIOR WITH NONADDITIVE EMPIRICAL RISK
Posterior concentration and fast convergence rates for generalized Bayesian learning
Model-free posterior inference on the area under the receiver operating characteristic curve
A Bayesian approach to (online) transfer learning: theory and algorithms
Robust posterior inference for Youden's index cutoff
Minimax rates for conditional density estimation via empirical entropy
User-friendly Introduction to PAC-Bayes Bounds
Robust and rate-optimal Gibbs posterior inference on the boundary of a noisy image
Joint production in stochastic non-parametric envelopment of data with firm-specific directions
'Purposely misspecified' posterior inference on the volatility of a jump diffusion process
Adaptive variable selection for sequential prediction in multivariate dynamic models
Gibbs posterior concentration rates under sub-exponential type losses
Adaptive Bayesian density estimation with location-scale mixtures
Model misspecification, Bayesian versus credibility estimation, and Gibbs posteriors
Quasi-Bayesian analysis of nonparametric instrumental variables models
Fano's inequality for random variables
Using the doubling dimension to analyze the generalization of learning algorithms
Bayesian inference on volatility in the presence of infinite jump activity and microstructure noise
Gibbs posterior for variable selection in high-dimensional classification and data mining
RISK MINIMIZATION FOR TIME SERIES BINARY CHOICE WITH VARIABLE SELECTION
A note on some algorithms for the Gibbs posterior
The benefit of group sparsity
Contextuality of misspecification and data-dependent losses
Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
Sparse recovery in convex hulls via entropy penalization
Gibbs posterior inference on multivariate quantiles
Gibbs posterior inference on value-at-risk
On general Bayesian inference using loss functions
On the properties of variational approximations of Gibbs posteriors
Minimum description length revisited
Predicting Panel Data Binary Choice with the Gibbs Posterior
Gibbs posterior convergence and the thermodynamic formalism