Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
From MaRDI portal
Publication: Q5272318
DOI: 10.1109/TIT.2011.2165799
zbMath: 1365.62276
arXiv: 0910.2042
OpenAlex: W2159700154
MaRDI QID: Q5272318
Garvesh Raskutti, Martin J. Wainwright, Bin Yu
Publication date: 12 July 2017
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://arxiv.org/abs/0910.2042
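For context, the paper's headline result can be summarized as follows (a hedged paraphrase of the arXiv abstract; constants and the design-matrix conditions stated in the paper are omitted): for a parameter vector in the $\ell_q$-ball of radius $R_q$ with $0 \le q \le 1$, design dimension $d$, and sample size $n$, the minimax $\ell_2$-estimation error scales as

```latex
\min_{\hat\beta}\;\max_{\beta^* \in B_q(R_q)}
  \mathbb{E}\,\bigl\|\hat\beta - \beta^*\bigr\|_2^2
  \;\asymp\; R_q \left(\frac{\log d}{n}\right)^{1 - q/2},
% for exact sparsity (q = 0, at most s nonzeros) this specializes to
%   \frac{s \log(d/s)}{n}.
```

The paper establishes matching upper and lower bounds of this order, for both $\ell_2$-error and prediction loss, under conditions on the design matrix.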
MSC classifications:
Nonparametric regression and quantile regression (62G08)
Linear regression; mixed models (62J05)
Minimax procedures in statistical decision theory (62C20)
Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
Related Items
An Exact and Robust Conformal Inference Method for Counterfactual and Synthetic Controls
Bayesian Regression Using a Prior on the Model Fit: The R2-D2 Shrinkage Prior
Multistage Convex Relaxation Approach to Rank Regularized Minimization Problems Based on Equivalent Mathematical Program with a Generalized Complementarity Constraint
High-Dimensional Factor Regression for Heterogeneous Subpopulations
Robust transfer learning of high-dimensional generalized linear model
Regularized Estimation in High-Dimensional Vector Auto-Regressive Models Using Spatio-Temporal Information
Greedy Variable Selection for High-Dimensional Cox Models
Orthogonalized Kernel Debiased Machine Learning for Multimodal Data Analysis
Sparse estimation in high-dimensional linear errors-in-variables regression via a covariate relaxation method
Optimal false discovery control of minimax estimators
Sparse quantile regression
Kernel Ordinary Differential Equations
Individual Data Protected Integrative Regression Analysis of High-Dimensional Heterogeneous Data
Integrative Factor Regression and Its Inference for Multimodal Data Analysis
Minimax rates for conditional density estimation via empirical entropy
Rate-optimal robust estimation of high-dimensional vector autoregressive models
UNIFORM-IN-SUBMODEL BOUNDS FOR LINEAR REGRESSION IN A MODEL-FREE FRAMEWORK
Post-selection Inference of High-dimensional Logistic Regression Under Case–Control Design
A unified precision matrix estimation framework via sparse column-wise inverse operator under weak sparsity
Understanding Implicit Regularization in Over-Parameterized Single Index Model
Independently Interpretable Lasso for Generalized Linear Models
Consistent parameter estimation for Lasso and approximate message passing
The Lasso for High Dimensional Regression with a Possible Change Point
Prediction risk for the horseshoe regression
Ridge regression and asymptotic minimax estimation over spheres of growing dimension
High-dimensional regression with unknown variance
A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
A general theory of concave regularization for high-dimensional sparse estimation problems
Discussion of: ``Grouping strategies and thresholding for high dimension linear models''
Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms
Canonical thresholding for nonsparse high-dimensional linear regression
A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery
Significance testing in non-sparse high-dimensional linear models
On the prediction loss of the Lasso in the partially labeled setting
Partitioned Approach for High-dimensional Confidence Intervals with Large Split Sizes
REMI: REGRESSION WITH MARGINAL INFORMATION AND ITS APPLICATION IN GENOME-WIDE ASSOCIATION STUDIES
Best subset selection via a modern optimization lens
An analysis of penalized interaction models
Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
Testability of high-dimensional linear models with nonsparse structures
Obtaining minimax lower bounds: a review
SLOPE is adaptive to unknown sparsity and asymptotically minimax
High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
Sparse PCA-based on high-dimensional Itô processes with measurement errors
Sub-optimality of some continuous shrinkage priors
Oracle Estimation of a Change Point in High-Dimensional Quantile Regression
Trace regression model with simultaneously low rank and row(column) sparse parameter
On the optimality of sliced inverse regression in high dimensions
Best subset binary prediction
Sparsity identification for high-dimensional partially linear model with measurement error
Calibrated zero-norm regularized LS estimator for high-dimensional error-in-variables regression
Adaptive risk bounds in univariate total variation denoising and trend filtering
On estimation error bounds of the Elastic Net when p ≫ n
On constrained and regularized high-dimensional regression
Estimating piecewise monotone signals
Nearly optimal minimax estimator for high-dimensional sparse linear regression
Accuracy assessment for high-dimensional linear regression
Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
Adaptive and optimal online linear regression on \(\ell^1\)-balls
Grouping strategies and thresholding for high dimensional linear models
Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces
Nearly optimal Bayesian shrinkage for high-dimensional regression
On estimation of isotonic piecewise constant signals
Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression
A two-step method for estimating high-dimensional Gaussian graphical models
High-dimensional estimation with geometric constraints
Near-optimal estimation of simultaneously sparse and low-rank matrices from nested linear measurements
Model selection in regression under structural constraints
Optimal Sparse Linear Prediction for Block-missing Multi-modality Data Without Imputation
The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods
Kullback-Leibler aggregation and misspecified generalized linear models
Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions
Minimax sparse principal subspace estimation in high dimensions
A general framework for Bayes structured linear models
Asymptotic risk and phase transition of \(l_1\)-penalized robust estimator
Aggregation of affine estimators
Estimation and variable selection with exponential weights
An asymptotically minimax kernel machine
A multi-stage convex relaxation approach to noisy structured low-rank matrix recovery
Entropy numbers of finite-dimensional embeddings
Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
Optimal Kullback-Leibler aggregation in mixture density estimation by maximum likelihood
PUlasso: High-Dimensional Variable Selection With Presence-Only Data
Minimax optimal estimation in partially linear additive models under high dimension
A strong converse bound for multiple hypothesis testing, with applications to high-dimensional estimation
Pathwise coordinate optimization for sparse learning: algorithm and theory
Decomposable norm minimization with proximal-gradient homotopy algorithm
Slope meets Lasso: improved oracle bounds and optimality
Overcoming the limitations of phase transition by higher order analysis of regularization techniques
The DFS Fused Lasso: Linear-Time Denoising over General Graphs
A Tight Bound of Hard Thresholding
Tuning parameter selection for the adaptive LASSO in the autoregressive model
Rate optimal estimation and confidence intervals for high-dimensional regression with missing covariates
Exponential screening and optimal rates of sparse estimation
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
Greedy variance estimation for the LASSO
Regression in Tensor Product Spaces by the Method of Sieves
Kernel Knockoffs Selection for Nonparametric Additive Models
Minimax rates in network analysis: graphon estimation, community detection and hypothesis testing
The Geometry of Differential Privacy: The Small Database and Approximate Cases
OR Forum—An Algorithmic Approach to Linear Regression
Sparse recovery via differential inclusions
Estimating multi-index models with response-conditional least squares
Robust regression via multivariate regression depth
Sparse Sliced Inverse Regression Via Lasso
Fast global convergence of gradient methods for high-dimensional statistical recovery
\(\ell_{2,0}\)-norm based selection and estimation for multivariate generalized linear models
The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning
Sharp oracle inequalities for low-complexity priors
Ultra high-dimensional multivariate posterior contraction rate under shrinkage priors
Convergence rates of least squares regression estimators with heavy-tailed errors
Compound Poisson point processes, concentration and oracle inequalities
Computational and statistical analyses for robust non-convex sparse regularized regression problem
Regularization and the small-ball method II: complexity dependent error rates
Optimal linear discriminators for the discrete choice model in growing dimensions
Robust subset selection
Linear Hypothesis Testing in Dense High-Dimensional Linear Models
Stochastic continuum-armed bandits with additive models: minimax regrets and adaptive algorithm
Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses
Sparse regression at scale: branch-and-bound rooted in first-order optimization
Minimax-optimal nonparametric regression in high dimensions
High dimensional generalized linear models for temporal dependent data
Graph-Based Regularization for Regression Problems with Alignment and Highly Correlated Designs
Orthogonal one step greedy procedure for heteroscedastic linear models