Exponential weights in multivariate regression and a low-rankness favoring prior
Publication: 2179638
DOI: 10.1214/19-AIHP1010
zbMath: 1439.62164
arXiv: 1806.09405
Wikidata: Q115517753
Scholia: Q115517753
MaRDI QID: Q2179638
Publication date: 13 May 2020
Published in: Annales de l'Institut Henri Poincaré. Probabilités et Statistiques
Full work available at URL: https://arxiv.org/abs/1806.09405
MSC classifications:
- Estimation in multivariate analysis (62H12)
- Linear regression; mixed models (62J05)
- Inequalities; stochastic orderings (60E15)
Related Items (4)
- A reduced-rank approach to predicting multiple binary responses through machine learning
- User-friendly Introduction to PAC-Bayes Bounds
- Simple proof of the risk bound for denoising by exponential weights for asymmetric noise distributions
- Matrix factorization for multivariate time series analysis
Cites Work
- Optimal learning with Bernstein Online Aggregation
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- Mirror averaging with sparsity priors
- Exponential screening and optimal rates of sparse estimation
- Estimation of high-dimensional low-rank matrices
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Optimal selection of reduced rank estimators of high-dimensional matrices
- Joint variable and rank selection for parsimonious estimation of high-dimensional matrices
- Learning by mirror averaging
- Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
- Minimax multiple shrinkage estimation
- Reduced-rank regression for the multivariate linear model
- Combining different procedures for adaptive regression
- An oracle inequality for quasi-Bayesian nonnegative matrix factorization
- Optimal bounds for aggregation of affine estimators
- Sharp oracle inequalities for aggregation of affine estimators
- PAC-Bayesian bounds for sparse regression estimation with exponential weights
- Fast learning rates in statistical inference through aggregation
- Aggregation of affine estimators
- Bayesian Methods for Low-Rank Matrix Estimation: Short Survey and Theoretical Study
- Information Theory and Mixing Least-Squares Regressions
- Combining Minimax Shrinkage Estimators
- Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models
- The Power of Convex Relaxation: Near-Optimal Matrix Completion
- Sparsity regret bounds for individual sequences in online linear regression
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities
- Sparse estimation by exponential weighting