On the exponentially weighted aggregate with the Laplace prior
From MaRDI portal
Publication: 1800807
DOI: 10.1214/17-AOS1626
zbMath: 1409.62135
arXiv: 1611.08483
OpenAlex: W2557776807
Wikidata: Q129370883 (Scholia: Q129370883)
MaRDI QID: Q1800807
Edwin Grappin, Quentin Paris, Arnak S. Dalalyan
Publication date: 24 October 2018
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1611.08483
Keywords: exponential weights; sparsity; high-dimensional regression; oracle inequality; Bayesian Lasso; trace regression; low-rank matrices
Mathematics Subject Classification:
- Estimation in multivariate analysis (62H12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Linear regression; mixed models (62J05)
Related Items (4)
- Concentration of information content for convex measures
- User-friendly Introduction to PAC-Bayes Bounds
- Matrix factorization for multivariate time series analysis
- Sharp oracle inequalities for low-complexity priors
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- Conditions for posterior contraction in the sparse normal means problem
- Estimation and testing under sparsity. École d'Été de Probabilités de Saint-Flour XLV -- 2015
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Ordered smoothers with exponential weighting
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- Mirror averaging with sparsity priors
- Degrees of freedom in lasso problems
- Concentration inequalities for the exponential weighting method
- On the prediction performance of the Lasso
- Statistics for high-dimensional data. Methods, theory and applications.
- Exponential screening and optimal rates of sparse estimation
- Estimation of high-dimensional low-rank matrices
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Optimal selection of reduced rank estimators of high-dimensional matrices
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- Concentration of the information in data with log-concave distributions
- Bayesian linear regression with sparse priors
- On adaptive posterior concentration rates
- SLOPE-adaptive variable selection via convex optimization
- Learning by mirror averaging
- Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
- Sparse recovery in convex hulls via entropy penalization
- Estimating the dimension of a model
- Combining different procedures for adaptive regression
- 1-bit matrix completion: PAC-Bayesian analysis of a variational approximation
- Mixing strategies for density estimation.
- Statistical learning theory and stochastic optimization. École d'Été de Probabilités de Saint-Flour XXXI -- 2001.
- Needles and straw in a haystack: posterior concentration for possibly sparse sequences
- Sharp oracle inequalities for aggregation of affine estimators
- PAC-Bayesian estimation and prediction in sparse additive models
- On the conditions used to prove oracle results for the Lasso
- PAC-Bayesian bounds for sparse regression estimation with exponential weights
- On the optimality of the aggregate with exponential weights for low temperatures
- Pivotal estimation via square-root lasso in nonparametric regression
- A Bayesian approach for noisy matrix completion: optimal rate under general sampling distribution
- Fast learning rates in statistical inference through aggregation
- Simultaneous analysis of Lasso and Dantzig selector
- Sparsity oracle inequalities for the Lasso
- Optimal rates and adaptation in the single-index model using aggregation
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- Noisy low-rank matrix completion with general sampling distribution
- Aggregation of affine estimators
- Estimation and variable selection with exponential weights
- Statistical inference in compound functional models
- On the "degrees of freedom" of the lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Bayesian Methods for Low-Rank Matrix Estimation: Short Survey and Theoretical Study
- Adapting to Unknown Smoothness via Wavelet Shrinkage
- Scaled sparse linear regression
- Information Theory and Mixing Least-Squares Regressions
- The Bayesian Lasso
- Bayesian lasso regression
- Adaptive Regression by Mixing
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- A Statistical View of Some Chemometrics Regression Tools
- Sharp Oracle Inequalities for High-Dimensional Matrix Prediction
- Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements
- The Power of Convex Relaxation: Near-Optimal Matrix Completion
- Restricted strong convexity and weighted matrix completion: Optimal bounds with noise
- Sparse single-index model
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Model Selection and Estimation in Regression with Grouped Variables
- Learning Theory
- Some Comments on Cp
- Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities
- A new look at the statistical model identification