Minimax risks for sparse regressions: ultra-high dimensional phenomenons
From MaRDI portal
Publication: 1950804
DOI: 10.1214/12-EJS666
zbMath: 1334.62120
arXiv: 1008.0526
OpenAlex: W2964097857
MaRDI QID: Q1950804
Publication date: 28 May 2013
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1008.0526
Keywords: dimension reduction, adaptive estimation, minimax risk, high-dimensional regression, high-dimensional geometry
Related Items (46)
- High-dimensional asymptotics of likelihood ratio tests in the Gaussian sequence model under convex constraints
- Adaptive robust estimation in sparse vector model
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Solution of linear ill-posed problems by model selection and aggregation
- A posterior probability approach for gene regulatory network inference in genetic perturbation data
- Minimax adaptive tests for the functional linear model
- Estimating minimum effect with outlier selection
- How can we identify the sparsity structure pattern of high-dimensional data: an elementary statistical analysis to interpretable machine learning
- Minimax rate of testing in sparse linear regression
- Optimal detection of sparse principal components in high dimension
- Accuracy assessment for high-dimensional linear regression
- Estimation of the \(\ell_2\)-norm and testing in sparse linear regression with unknown variance
- Estimation of functionals of sparse covariance matrices
- Optimization of sampling designs for pedigrees and association studies
- Honest Confidence Sets for High-Dimensional Regression by Projection and Shrinkage
- Inference for High-Dimensional Linear Mixed-Effects Models: A Quasi-Likelihood Approach
- Inferring large graphs using \(\ell_1\)-penalized likelihood
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- Detection boundary in sparse regression
- Adaptive confidence sets in shape restricted regression
- Asymptotic risk and phase transition of \(l_1\)-penalized robust estimator
- Estimation and variable selection with exponential weights
- Sparse regression and support recovery with \(\mathbb{L}_2\)-boosting algorithms
- Adaptive estimation of the sparsity in the Gaussian vector model
- Minimax optimal estimation in partially linear additive models under high dimension
- Slope meets Lasso: improved oracle bounds and optimality
- Optimal adaptive estimation of linear functionals under sparsity
- Beyond support in two-stage variable selection
- A global homogeneity test for high-dimensional linear regression
- Greedy variance estimation for the LASSO
- Sharp variable selection of a sparse submatrix in a high-dimensional noisy matrix
- Block-Diagonal Covariance Selection for High-Dimensional Gaussian Graphical Models
- High-dimensional regression with unknown variance
- Optimal sparsity testing in linear regression model
- Robust regression via multivariate regression depth
- Tight conditions for consistency of variable selection in the context of high dimensionality
- Variable selection consistency of Gaussian process regression
- Sharp oracle inequalities for low-complexity priors
- Regularization and the small-ball method II: complexity dependent error rates
- The all-or-nothing phenomenon in sparse linear regression
- Global and Simultaneous Hypothesis Testing for High-Dimensional Logistic Regression Models
- Minimax-optimal nonparametric regression in high dimensions
- Detecting positive correlations in a multivariate sample
- Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
Cites Work
- Exponential screening and optimal rates of sparse estimation
- Global testing under sparse alternatives: ANOVA, multiple comparisons and the higher criticism
- Near-ideal model selection by \(\ell _{1}\) minimization
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003.
- High-dimensional Gaussian model selection on a Gaussian design
- Gaussian model selection with an unknown variance
- A simple proof of the restricted isometry property for random matrices
- On minimax estimation of a sparse normal mean vector
- Asymptotically minimax hypothesis testing for nonparametric alternatives. I
- Asymptotically minimax hypothesis testing for nonparametric alternatives. II
- Asymptotically minimax hypothesis testing for nonparametric alternatives. III
- Minimax risk over \(l_p\)-balls for \(l_q\)-error
- Adaptive tests of linear hypotheses by model selection
- Dimension reduction for conditional mean in regression
- Adaptive detection of a signal of growing dimension. I
- Adaptive detection of a signal of growing dimension. II
- Minimax detection of a signal for \(l^n\)-balls.
- Non-asymptotic minimax rates of testing in signal detection
- Higher criticism for detecting sparse heterogeneous mixtures.
- Least angle regression. (With discussion)
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- Estimation of Gaussian graphs by model selection
- MAP model selection in Gaussian regression
- Detection boundary in sparse regression
- Minimal penalties for Gaussian model selection
- Goodness-of-fit tests for high-dimensional Gaussian linear models
- Simultaneous analysis of Lasso and Dantzig selector
- Kernel dimension reduction in regression
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- A New Lower Bound for Multiple Hypothesis Testing
- Decoding by Linear Programming
- Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing
- Smallest singular value of a random rectangular matrix
- Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Regularization and Variable Selection Via the Elastic Net
- An alternative point of view on Lepski's method
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Compressed sensing
- Gaussian model selection