Mirror averaging with sparsity priors
From MaRDI portal
Publication: 442083
DOI: 10.3150/11-BEJ361 · zbMath: 1243.62008 · arXiv: 1003.1189 · MaRDI QID: Q442083
Arnak S. Dalalyan, Alexandre B. Tsybakov
Publication date: 9 August 2012
Published in: Bernoulli
Full work available at URL: https://arxiv.org/abs/1003.1189
Mathematics Subject Classification
- Density estimation (62G07)
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Linear regression; mixed models (62J05)
- Bayesian inference (62F15)
- Statistical decision theory (62C99)
Related Items
- Entropic optimal transport is maximum-likelihood deconvolution
- Exponential weights in multivariate regression and a low-rankness favoring prior
- PAC-Bayesian risk bounds for group-analysis sparse regression by exponential weighting
- Sharp oracle inequalities for aggregation of affine estimators
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- PAC-Bayesian bounds for sparse regression estimation with exponential weights
- Optimal learning with \textit{Q}-aggregation
- Optimal Kullback-Leibler aggregation in mixture density estimation by maximum likelihood
- Exponential screening and optimal rates of sparse estimation
- Prediction error bounds for linear regression with the TREX
- A quasi-Bayesian perspective to online clustering
- Adaptive Bayesian density regression for high-dimensional data
- Sparse estimation by exponential weighting
- On the exponentially weighted aggregate with the Laplace prior
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- Exponential screening and optimal rates of sparse estimation
- The Dantzig selector and sparsity oracle inequalities
- Generalized mirror averaging and \(D\)-convex aggregation
- PAC-Bayesian bounds for randomized empirical risk minimizers
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Sparsity in penalized empirical risk minimization
- Nonlinear estimation over weak Besov spaces and minimax Bayes
- From \(\varepsilon\)-entropy to KL-entropy: analysis of minimum information complexity density estimation
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Learning by mirror averaging
- A universal procedure for aggregating estimators
- Mixing least-squares estimators when the variance is unknown
- Sparse recovery in convex hulls via entropy penalization
- PAC-Bayesian stochastic model selection
- Aggregating regression procedures to improve performance
- Statistical learning theory and stochastic optimization. École d'Été de Probabilités de Saint-Flour XXXI -- 2001.
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Optimal aggregation of classifiers in statistical learning.
- On the optimality of the aggregate with exponential weights for low temperatures
- Fast learning rates in statistical inference through aggregation
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- Aggregation for Gaussian regression
- On optimality of Bayesian testimation in the normal means problem
- Optimal rates of aggregation in classification under low noise assumption
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Empirical Bayes selection of wavelet thresholds
- Better Subset Regression Using the Nonnegative Garrote
- On the Generalization Ability of On-Line Learning Algorithms
- Stable recovery of sparse overcomplete representations in the presence of noise
- Information Theory and Mixing Least-Squares Regressions
- Smoothing of Multivariate Data
- Sequential Procedures for Aggregating Arbitrary Estimators of a Conditional Mean
- Adaptive Regression by Mixing
- Sequential prediction of individual sequences under general loss functions
- Learning Theory and Kernel Machines
- Aggregation and Sparsity Via ℓ1 Penalized Least Squares
- Regularization and Variable Selection Via the Elastic Net
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Prediction, Learning, and Games
- Convexity, Classification, and Risk Bounds