A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers



DOI: 10.1214/12-STS400
zbMath: 1331.62350
arXiv: 1010.2731
MaRDI QID: Q5965308

Sahand N. Negahban, Pradeep Ravikumar, Martin J. Wainwright, Bin Yu

Publication date: 3 March 2016

Published in: Statistical Science

Full work available at URL: https://arxiv.org/abs/1010.2731


62J07: Ridge regression; shrinkage estimators (Lasso)
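
For context, the unified framework of this paper concerns regularized \(M\)-estimators \(\widehat{\theta}_{\lambda_n} \in \arg\min_{\theta} \{ \mathcal{L}(\theta; Z_1^n) + \lambda_n \mathcal{R}(\theta) \}\), where the regularizer \(\mathcal{R}\) is decomposable with respect to a pair of subspaces \(\mathcal{M} \subseteq \overline{\mathcal{M}}\), meaning \(\mathcal{R}(u + v) = \mathcal{R}(u) + \mathcal{R}(v)\) for all \(u \in \mathcal{M}\) and \(v \in \overline{\mathcal{M}}^{\perp}\). The sketch below is a minimal illustration of the best-known instance, the Lasso (least-squares loss with the decomposable \(\ell_1\) penalty), solved by proximal gradient descent; it is not code from the paper, and the function names (lasso_ista, soft_threshold) and the synthetic data are this note's own assumptions, requiring only NumPy.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1; the l1 norm is the canonical
        # decomposable regularizer (it splits across coordinate subspaces).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def lasso_ista(X, y, lam, n_iter=500):
        # Solve min_theta (1/(2n)) * ||y - X @ theta||_2^2 + lam * ||theta||_1
        # by proximal gradient descent (ISTA).
        n, p = X.shape
        theta = np.zeros(p)
        step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = X.T @ (X @ theta - y) / n
            theta = soft_threshold(theta - step * grad, step * lam)
        return theta

    # Synthetic high-dimensional example: n = 100 samples, p = 400 parameters,
    # true theta* supported on 5 coordinates (an exactly sparse model subspace M).
    rng = np.random.default_rng(0)
    n, p, s = 100, 400, 5
    X = rng.standard_normal((n, p))
    theta_star = np.zeros(p)
    theta_star[:s] = 1.0
    y = X @ theta_star + 0.1 * rng.standard_normal(n)
    lam = 2 * 0.1 * np.sqrt(np.log(p) / n)  # lambda_n ~ sigma * sqrt(log p / n)
    theta_hat = lasso_ista(X, y, lam)
    print("l2 estimation error:", np.linalg.norm(theta_hat - theta_star))

Under the paper's restricted strong convexity condition, a regularization level \(\lambda_n \asymp \sigma \sqrt{\log p / n}\) of the kind set above yields an \(\ell_2\)-error of order \(\sqrt{s \log p / n}\) for an \(s\)-sparse target, against which the printed error can be compared.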


Related Items

Robust transfer learning of high-dimensional generalized linear model, Multiple change points detection in high-dimensional multivariate regression, Generalized linear models with structured sparsity estimators, Rejoinder to “Reader reaction to ‘Outcome‐adaptive Lasso: Variable selection for causal inference’ by Shortreed and Ertefaie (2017)”, Multivariate functional response low‐rank regression with an application to brain imaging data, Oracle inequality for sparse trace regression models with exponential \(\beta\)-mixing errors, Inference on heterogeneous treatment effects in high‐dimensional dynamic panels under weak dependence, High-Dimensional Gaussian Graphical Regression Models with Covariates, Concave Likelihood-Based Regression with Finite-Support Response Variables, Penalized wavelet nonparametric univariate logistic regression for irregular spaced data, The rate of convergence for sparse and low-rank quantile trace regression, Sparse and Low-Rank Matrix Quantile Estimation With Application to Quadratic Regression, Integrative sparse reduced-rank regression via orthogonal rotation for analysis of high-dimensional multi-source data, Sparse Laplacian shrinkage for nonparametric transformation survival model, Model-Assisted Uniformly Honest Inference for Optimal Treatment Regimes in High Dimension, Correlation Tensor Decomposition and Its Application in Spatial Imaging Data, Honest Confidence Sets for High-Dimensional Regression by Projection and Shrinkage, Profile GMM estimation of panel data models with interactive fixed effects, A Likelihood-Based Approach for Multivariate Categorical Response Regression in High Dimensions, Individual Data Protected Integrative Regression Analysis of High-Dimensional Heterogeneous Data, Rate-optimal robust estimation of high-dimensional vector autoregressive models, Robust matrix estimations meet Frank-Wolfe algorithm, Distributed Decoding From Heterogeneous 1-Bit Compressive Measurements, A convex-Nonconvex strategy for grouped variable selection, Post-selection Inference of High-dimensional Logistic Regression Under Case–Control Design, The nonparametric Box-Cox model for high-dimensional regression analysis, Multi-Task Learning with High-Dimensional Noisy Images, Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis, A network Lasso model for regression, Cross-Fitted Residual Regression for High-Dimensional Heteroscedasticity Pursuit, Two-stage communication-efficient distributed sparse M-estimation with missing data, Variable selection and regularization via arbitrary rectangle-range generalized elastic net, A unified precision matrix estimation framework via sparse column-wise inverse operator under weak sparsity, The Lasso with general Gaussian designs with applications to hypothesis testing, Carving model-free inference, Envelopes and principal component regression, Multiple Change Point Detection in Reduced Rank High Dimensional Vector Autoregressive Models, Approximate Selective Inference via Maximum Likelihood, Statistical performance of quantile tensor regression with convex regularization, Expectile trace regression via low-rank and group sparsity regularization, Distributed estimation and inference for spatial autoregression model with large scale networks, High-dimensional functional graphical model structure learning via neighborhood selection approach, A joint estimation for the high-dimensional regression modeling on stratified data, A Cluster Elastic Net for Multivariate 
Regression, Model-assisted inference for treatment effects using regularized calibrated estimation with high-dimensional data, On asymptotically optimal confidence regions and tests for high-dimensional models, Regularized estimation in sparse high-dimensional time series models, Transfer Learning under High-dimensional Generalized Linear Models, Confidence intervals for high-dimensional inverse covariance estimation, A Note on Coding and Standardization of Categorical Variables in (Sparse) Group Lasso Regression, On solutions of sparsity constrained optimization, Worst possible sub-directions in high-dimensional models, An analysis of penalized interaction models, Minimum distance Lasso for robust high-dimensional regression, Exact post-selection inference, with application to the Lasso, Comprehensive comparative analysis and identification of RNA-binding protein domains: multi-class classification and feature selection, Joint estimation of precision matrices in heterogeneous populations, Geometric inference for general high-dimensional linear inverse problems, Oracle inequalities, variable selection and uniform inference in high-dimensional correlated random effects panel data models, Sharp MSE bounds for proximal denoising, Sub-optimality of some continuous shrinkage priors, Oracle inequalities for the lasso in the Cox model, The geometry of least squares in the 21st century, Simple bounds for recovering low-complexity models, Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression, Sharp recovery bounds for convex demixing, with applications, The log-linear group-lasso estimator and its asymptotic properties, Uniqueness conditions for low-rank matrix recovery, Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions, Estimation of high-dimensional partially-observed discrete Markov random fields, Optimal computational and statistical rates of convergence for sparse nonconvex learning problems, Robust inference on average treatment effects with possibly more covariates than observations, Decomposable norm minimization with proximal-gradient homotopy algorithm, Estimation of high-dimensional low-rank matrices, Estimation of (near) low-rank matrices with noise and high-dimensional scaling, Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion, Generalized M-estimators for high-dimensional Tobit I models, High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity, Gelfand numbers related to structured sparsity and Besov space embeddings with small mixed smoothness, Fast global convergence of gradient methods for high-dimensional statistical recovery, A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery, A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions, Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls, Stability of the elastic net estimator, Nonnegative-Lasso and application in index tracking, A general family of trimmed estimators for robust high-dimensional data analysis, Distributed testing and estimation under sparse high dimensional models, Sparse linear models and \(l_1\)-regularized 2SLS with high-dimensional endogenous regressors and instruments, Scalable methods for Bayesian selective inference, Trace regression model with simultaneously low rank and row(column) sparse parameter, Robust shrinkage estimation and 
selection for functional multiple linear model through LAD loss, Multivariate factorizable expectile regression with application to fMRI data, Relaxed sparse eigenvalue conditions for sparse estimation via non-convex regularized regression, Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions, A constrained \(\ell1\) minimization approach for estimating multiple sparse Gaussian or nonparanormal graphical models, An analysis of the SPARSEVA estimate for the finite sample data case, High-dimensional grouped folded concave penalized estimation via the LLA algorithm, Pathwise coordinate optimization for sparse learning: algorithm and theory, Regularization and the small-ball method. I: Sparse recovery, I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error, The convex geometry of linear inverse problems, The Lasso problem and uniqueness, Restricted strong convexity implies weak submodularity, The landscape of empirical risk for nonconvex losses, Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations, Asymptotic properties on high-dimensional multivariate regression M-estimation, Consistent multiple changepoint estimation with fused Gaussian graphical models, Graphical-model based high dimensional generalized linear models, Matrix optimization based Euclidean embedding with outliers, An outer-inner linearization method for non-convex and nondifferentiable composite regularization problems, The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning, Integrative methods for post-selection inference under convex constraints, The cost of privacy: optimal rates of convergence for parameter estimation with differential privacy, Sampling from non-smooth distributions through Langevin diffusion, Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence, Inference for high-dimensional varying-coefficient quantile regression, Sparse regression for extreme values, The finite sample properties of sparse M-estimators with pseudo-observations, Asymptotic linear expansion of regularized M-estimators, Quantile regression feature selection and estimation with grouped variables using Huber approximation, A Lagrange-Newton algorithm for sparse nonlinear programming, Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses, High-performance statistical computing in the computing environments of the 2020s, On the grouping effect of the \(l_{1-2}\) models, High dimensional generalized linear models for temporal dependent data, Covariate-adjusted inference for differential analysis of high-dimensional networks, Penalized and constrained LAD estimation in fixed and high dimension, Testability of high-dimensional linear models with nonsparse structures, Adaptive log-density estimation, Doubly robust semiparametric inference using regularized calibrated estimation with high-dimensional data, Regularized high dimension low tubal-rank tensor regression, Post-model-selection inference in linear regression models: an integrated review, Penalized least square in sparse setting with convex penalty and non Gaussian errors, A data-driven line search rule for support recovery in high-dimensional data analysis, Gradient projection Newton pursuit for sparsity constrained optimization, Estimating sparse networks with hubs, Prediction error after 
model search, Robust machine learning by median-of-means: theory and practice, Lasso guarantees for \(\beta \)-mixing heavy-tailed time series, A two-step method for estimating high-dimensional Gaussian graphical models, Statistical analysis of sparse approximate factor models, Variable selection for sparse logistic regression, Is distribution-free inference possible for binary regression?, Fast and Reliable Parameter Estimation from Nonlinear Observations, High-dimensional estimation with geometric constraints, Hard thresholding regression, A Tight Bound of Hard Thresholding, Simultaneous Clustering and Estimation of Heterogeneous Graphical Models, Cut Pursuit: Fast Algorithms to Learn Piecewise Constant Functions on General Weighted Graphs, Minimax Optimal Procedures for Locally Private Estimation, Group Regularized Estimation Under Structural Hierarchy, FACTORISABLE MULTITASK QUANTILE REGRESSION, A study on tuning parameter selection for the high-dimensional lasso, A Generic Path Algorithm for Regularized Statistical Estimation, A Simple Method for Estimating Interactions Between a Treatment and a Large Number of Covariates, Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO, Robust Width: A Characterization of Uniformly Stable and Robust Compressed Sensing, Graph-Based Regularization for Regression Problems with Alignment and Highly Correlated Designs, Color Image Inpainting via Robust Pure Quaternion Matrix Completion: Error Bound and Weighted Loss, Ising model selection using \(\ell_1\)-regularized linear regression: a statistical mechanics analysis, High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks, A proximal dual semismooth Newton method for zero-norm penalized quantile regression estimator, Overlapping group lasso for high-dimensional generalized linear models, Calibrated zero-norm regularized LS estimator for high-dimensional error-in-variables regression, A robust high dimensional estimation of a finite mixture of the generalized linear model, Post-selection inference of generalized linear models based on the lasso and the elastic net, High-dimensional dynamic systems identification with additional constraints, Regularized estimation of high‐dimensional vector autoregressions with weakly dependent innovations, Online Decision Making with High-Dimensional Covariates, An Equivalence between Critical Points for Rank Constraints Versus Low-Rank Factorizations, A Bootstrap Lasso + Partial Ridge Method to Construct Confidence Intervals for Parameters in High-dimensional Sparse Linear Models, Kernel Meets Sieve: Post-Regularization Confidence Bands for Sparse Additive Model, Robust Wasserstein profile inference and applications to machine learning, UNIFORM INFERENCE IN HIGH-DIMENSIONAL DYNAMIC PANEL DATA MODELS WITH APPROXIMATELY SPARSE FIXED EFFECTS, Noise-Robust Modes of the Retinal Population Code Have the Geometry of “Ridges” and Correspond to Neuronal Communities, Introduction to the special issue on sparsity and regularization methods, Structured sparsity through convex optimization, Quasi-likelihood and/or robust estimation in high dimensions, A unified framework for high-dimensional analysis of \(M\)-estimators with
decomposable regularizers, Oracle Inequalities for Convex Loss Functions with Nonlinear Targets, Matrix Completion Methods for Causal Panel Data Models, Counterfactual Analysis With Artificial Controls: Inference, High Dimensions, and Nonstationarity, High-dimensional rank-based graphical models for non-Gaussian functional data, Model selection for high-dimensional linear regression with dependent observations, Finite-sample analysis of \(M\)-estimators using self-concordance, Prediction and estimation consistency of sparse multi-class penalized optimal scoring, Sorted concave penalized regression, Recovery of simultaneous low rank and two-way sparse coefficient matrices, a nonconvex approach, SGL-SVM: a novel method for tumor classification via support vector machine with sparse group lasso, On rank estimators in increasing dimensions, Sharp oracle inequalities for low-complexity priors, Optimality conditions for rank-constrained matrix optimization, Computational and statistical analyses for robust non-convex sparse regularized regression problem, Generalized high-dimensional trace regression via nuclear norm regularization, High-dimensional generalized linear models incorporating graphical structure among predictors, Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming, Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary, Sparsistency and agnostic inference in sparse PCA, Structured estimation for the nonparametric Cox model, On model selection consistency of regularized M-estimators, High dimensional single index models, A group VISA algorithm for variable selection, Error bounds for compressed sensing algorithms with group sparsity: A unified approach, The degrees of freedom of partly smooth regularizers, A unified penalized method for sparse additive quantile models: an RKHS approach, Structure learning of sparse directed acyclic graphs incorporating the scale-free property, Graph-based sparse linear discriminant analysis for high-dimensional classification, Minimax sparse principal subspace estimation in high dimensions, Strong oracle optimality of folded concave penalized estimation, Penalised robust estimators for sparse and high-dimensional linear models, High-dimensional robust regression with \(L_q\)-loss functions, Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression, High-dimensional VARs with common factors, Low Complexity Regularization of Linear Inverse Problems, Lasso with convex loss: Model selection consistency and estimation, Variable selection for semiparametric regression models with iterated penalisation, Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model, Robust Decoding from 1-Bit Compressive Sampling with Ordinary and Regularized Least Squares, Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization, PUlasso: High-Dimensional Variable Selection With Presence-Only Data, L2RM: Low-Rank Linear Regression Models for High-Dimensional Matrix Responses, Dynamic Assortment Personalization in High Dimensions, Recovering Structured Signals in Noise: Least-Squares Meets Compressed Sensing

