Information-theoretic determination of minimax rates of convergence

Publication: 1578277

DOI: 10.1214/aos/1017939142
zbMath: 0978.62008
OpenAlex: W1524622012
MaRDI QID: Q1578277

Yuhong Yang, Andrew R. Barron

Publication date: 24 January 2002

Published in: The Annals of Statistics

Full work available at URL: https://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=1102&context=stat_las_preprints



Related Items

Concentration behavior of the penalized least squares estimator
Minimax rates for conditional density estimation via empirical entropy
Van Trees inequality, group equivariance, and estimation of principal subspaces
Orthogonal statistical learning
Spline local basis methods for nonparametric density estimation
Neural network approximation and estimation of classifiers with classification boundary in a Barron class
Minimax rate of distribution estimation on unknown submanifolds under adversarial losses
Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation
On least squares estimation under heteroscedastic and heavy-tailed errors
On strong Hellinger consistency of posterior distributions
Obtaining minimax lower bounds: a review
High-dimensional analysis of semidefinite relaxations for sparse principal components
Empirical variance minimization with applications in variance reduction and optimal control
Convergence rates for Bayesian density estimation of infinite-dimensional exponential families
Convergence rates of deep ReLU networks for multiclass classification
Matrix completion via max-norm constrained optimization
Geometric inference for general high-dimensional linear inverse problems
Bandit and covariate processes, with finite or non-denumerable set of arms
Empirical risk minimization in inverse problems
Leave-One-Out Bounds for Kernel Methods
A complement to Le Cam's theorem
Dynamically integrated regression model for online auction data
Fast learning rates in statistical inference through aggregation
Rejoinder
MULTI-ARMED BANDITS WITH COVARIATES: THEORY AND APPLICATIONS
Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
Unnamed Item
Localization of VC classes: beyond local Rademacher complexities
From \(\varepsilon\)-entropy to KL-entropy: analysis of minimum information complexity density estimation
Risk bounds for statistical learning
Minimax bounds for sparse PCA with noisy high-dimensional data
Multivariate intensity estimation via hyperbolic wavelet selection
Qualitative Robustness in Bayesian Inference
Feel-Good Thompson Sampling for Contextual Bandits and Reinforcement Learning
Minimax estimation in sparse canonical correlation analysis
On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces
Model selection in nonparametric regression
Entropy of convex functions on \(\mathbb R^d\)
On Rates of Convergence for Bayesian Density Estimation
Unnamed Item
Finite mixture regression: a sparse variable selection by model selection for clustering
Unnamed Item
Estimating conditional quantiles with the help of the pinball loss
Convergence of posterior distribution in the mixture of regressions
A recursive procedure for density estimation on the binary hypercube
Model selection by resampling penalization
Convergence of functional \(k\)-nearest neighbor regression estimate with functional responses
Bayesian sieve methods: approximation rates and adaptive posterior contraction rates
Regularized least-squares regression: learning from a sequence
Sparse PCA: optimal rates and adaptive estimation
Adaptive distributed methods under communication constraints
Parametric or nonparametric? A parametricness index for model selection
Minimax bounds for estimation of normal mixtures
Minimax adaptive dimension reduction for regression
Minimax-rate adaptive nonparametric regression with unknown correlations of errors
Combinatorial inference for graphical models
Catching up Faster by Switching Sooner: A Predictive Approach to Adaptive Estimation with an Application to the AIC–BIC Dilemma
Consistent Model Selection and Data-Driven Smooth Tests for Longitudinal Data in the Estimating Equations Approach
Local entropy in learning theory
A strong converse bound for multiple hypothesis testing, with applications to high-dimensional estimation
Fast adaptive estimation of log-additive exponential models in Kullback-Leibler divergence
A new method for estimation and model selection: \(\rho\)-estimation
Unnamed Item
ON RATE OPTIMALITY FOR ILL-POSED INVERSE PROBLEMS IN ECONOMETRICS
Asymptotic Theory of Information-Theoretic Experimental Design
Exponential series estimator of multivariate densities
Adaptation to anisotropy and inhomogeneity via dyadic piecewise polynomial selection
Unnamed Item
On the Consistency of Bayesian Function Approximation Using Step Functions
Fast learning rates for plug-in classifiers
Global and local two-sample tests via regression
Minimax Optimal Procedures for Locally Private Estimation
ON CONVERGENCE RATES FOR NONPARAMETRIC POSTERIOR DISTRIBUTIONS
Covering numbers for bounded variation functions
Kernel regression estimation in a Banach space
Mixing least-squares estimators when the variance is unknown
Multivariate extensions of isotonic regression and total variation denoising via entire monotonicity and Hardy-Krause variation
Cross-validation for comparing multiple density estimation procedures
Sup–Hellinger consistency for local density regression
Double-smoothing for bias reduction in local linear regression
Oracle posterior contraction rates under hierarchical priors
Estimation error analysis of deep learning on the regression problem on the variable exponent Besov space
Metric Entropy for Functions of Bounded Total Generalized Variation
Rates of Convergence for a Bayesian Level Set Estimation
LOCALIZED MODEL SELECTION FOR REGRESSION
Set structured global empirical risk minimizers are rate optimal in general dimensions
Consistency of restricted maximum likelihood estimators of principal components
A note on Bayesian nonparametric regression function estimation
On Rates of Convergence for Posterior Distributions Under Misspecification
Convergence rates of least squares regression estimators with heavy-tailed errors
A Direct Approach to Understanding Posterior Consistency of Bayesian Regression Problems
Minimax optimal goodness-of-fit testing for densities and multinomials under a local differential privacy constraint
Structured matrix estimation and completion
On nonlinear ill-posed inverse problems with applications to pricing of defaultable bonds and option pricing
Information-theoretic determination of minimax rates of convergence
Isotonic regression in general dimensions
Unnamed Item
Privacy Aware Learning
Combining different procedures for adaptive regression
Mixing strategies for density estimation.
Unnamed Item
A Universal Prior Distribution for Bayesian Consistency of Non parametric Procedures
Minimax-optimal nonparametric regression in high dimensions
On the Bias, Risk, and Consistency of Sample Means in Multi-armed Bandits
Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits Under Realizability
Spline adaptation to smoothness
Randomly initialized EM algorithm for two-component Gaussian mixture achieves near optimality in \(O(\sqrt{n})\) iterations
Model selection for regression on a random design



Cites Work