Sharp oracle inequalities for low-complexity priors
From MaRDI portal
Publication:2304249
DOI: 10.1007/s10463-018-0693-6
zbMath: 1439.62141
arXiv: 1702.03166
OpenAlex: W2615892432
MaRDI QID: Q2304249
Christophe Chesneau, Tung Duy Luu, Jalal Fadili
Publication date: 9 March 2020
Published in: Annals of the Institute of Statistical Mathematics
Full work available at URL: https://arxiv.org/abs/1702.03166
Keywords: oracle inequality; high-dimensional estimation; low-complexity models; penalized estimation; exponential weighted aggregation
MSC classifications:
Multivariate distribution of statistics (62H10)
Estimation in multivariate analysis (62H12)
Functional data analysis (62R10)
Inequalities; stochastic orderings (60E15)
Cites Work
- Nonlinear total variation based noise removal algorithms
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- The Bernstein-Orlicz norm and deviation inequalities
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- On the prediction performance of the Lasso
- Statistics for high-dimensional data. Methods, theory and applications.
- Exponential screening and optimal rates of sparse estimation
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Consistent group selection in high-dimensional linear regression
- Oracle inequalities and optimal inference under group sparsity
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- Linear and convex aggregation of density estimators
- Near-ideal model selection by \(\ell _{1}\) minimization
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003.
- Bayesian linear regression with sparse priors
- SLOPE-adaptive variable selection via convex optimization
- Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
- Aggregating regression procedures to improve performance
- Lectures on probability theory and statistics. École d'Été de Probabilités de Saint-Flour XXVIII - 1998. Summer school, Saint-Flour, France, August 17 -- September 3, 1998
- On the exponentially weighted aggregate with the Laplace prior
- The convex geometry of linear inverse problems
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- PAC-Bayesian estimation and prediction in sparse additive models
- On the conditions used to prove oracle results for the Lasso
- A Bayesian approach for noisy matrix completion: optimal rate under general sampling distribution
- Counting the faces of randomly-projected hypercubes and orthants, with applications
- Simultaneous analysis of Lasso and Dantzig selector
- The degrees of freedom of partly smooth regularizers
- PAC-Bayesian risk bounds for group-analysis sparse regression by exponential weighting
- High-dimensional generalized linear models and the lasso
- Simultaneous adaptation to the margin and to complexity in classification
- Exact matrix completion via convex optimization
- Convex Recovery of a Structured Signal from Independent Random Linear Measurements
- Low Complexity Regularization of Linear Inverse Problems
- PhaseLift: Exact and Stable Signal Recovery from Magnitude Measurements via Convex Programming
- Orthogonal Invariance and Identifiability
- Adaptive Minimax Estimation over Sparse $\ell_q$-Hulls
- Robust principal component analysis?
- Scaled sparse linear regression
- On sparse reconstruction from Fourier and Gaussian measurements
- Information Theory and Mixing Least-Squares Regressions
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Atomic Decomposition by Basis Pursuit
- Variational Analysis
- A new approach to variable selection in least squares problems
- Model Consistency of Partly Smooth Regularizers
- Model selection with low complexity priors
- The Generic Chaining
- Sparsity and Smoothness Via the Fused Lasso
- Equivalent Subgradient Versions of Hamiltonian and Euler–Lagrange Equations in Variational Analysis
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Uncertainty Principles and Vector Quantization
- Weakly decomposable regularization penalties and structured sparsity
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Model Selection and Estimation in Regression with Grouped Variables
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution
- An Introduction to Matrix Concentration Inequalities
- Convex analysis and monotone operator theory in Hilbert spaces
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Sparse estimation by exponential weighting