A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers

From MaRDI portal

Publication: 5965308

DOI: 10.1214/12-STS400
zbMath: 1331.62350
arXiv: 1010.2731
MaRDI QID: Q5965308

Bin Yu, Martin J. Wainwright, Sahand N. Negahban, Pradeep Ravikumar

Publication date: 3 March 2016

Published in: Statistical Science

Full work available at URL: https://arxiv.org/abs/1010.2731
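The paper this record describes gives a unified error analysis for regularized \(M\)-estimators whose penalty is decomposable, with the lasso's \(\ell_1\) norm as the canonical example. As an illustration only (not code from the paper), here is a minimal proximal-gradient (ISTA) lasso solver: decomposability of the \(\ell_1\) penalty means its proximal operator splits coordinate-wise into soft-thresholding.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1; the decomposable penalty
    # acts coordinate-wise, so the prox is applied per coordinate.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=1000):
    """Proximal-gradient (ISTA) solver for
    min_beta (1/2n) * ||y - X beta||_2^2 + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    # Step size = 1 / Lipschitz constant of the smooth part's gradient.
    step = n / (np.linalg.norm(X, 2) ** 2)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Toy sparse-recovery instance (all sizes and values illustrative).
rng = np.random.default_rng(0)
n, p, s = 100, 200, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = lasso_ista(X, y, lam=0.1)
```

With a regularization level of order \(\sigma\sqrt{\log p / n}\), the estimation error scales like \(\sqrt{s \log p / n}\), which is the flavor of guarantee the paper's framework delivers for any decomposable regularizer satisfying its conditions.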



Related Items

Matrix Completion Methods for Causal Panel Data Models, Counterfactual Analysis With Artificial Controls: Inference, High Dimensions, and Nonstationarity, Variable selection for semiparametric regression models with iterated penalisation, Color Image Inpainting via Robust Pure Quaternion Matrix Completion: Error Bound and Weighted Loss, Robust Decoding from 1-Bit Compressive Sampling with Ordinary and Regularized Least Squares, Ising model selection using ℓ 1-regularized linear regression: a statistical mechanics analysis*, Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization, High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks, Recovering Structured Signals in Noise: Least-Squares Meets Compressed Sensing, A proximal dual semismooth Newton method for zero-norm penalized quantile regression estimator, Overlapping group lasso for high-dimensional generalized linear models, Error bounds for compressed sensing algorithms with group sparsity: A unified approach, Calibrated zero-norm regularized LS estimator for high-dimensional error-in-variables regression, Penalised robust estimators for sparse and high-dimensional linear models, The degrees of freedom of partly smooth regularizers, A unified penalized method for sparse additive quantile models: an RKHS approach, A robust high dimensional estimation of a finite mixture of the generalized linear model, Post-selection inference of generalized linear models based on the lasso and the elastic net, High-dimensional dynamic systems identification with additional constraints, Regularized estimation of high‐dimensional vector autoregressions with weakly dependent innovations, High-dimensional robust regression with \(L_q\)-loss functions, Fast and Reliable Parameter Estimation from Nonlinear Observations, Structure learning of sparse directed acyclic graphs incorporating the scale-free 
property, Graph-based sparse linear discriminant analysis for high-dimensional classification, High-dimensional rank-based graphical models for non-Gaussian functional data, Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model, Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression, High-dimensional VARs with common factors, High-dimensional estimation with geometric constraints, Minimax sparse principal subspace estimation in high dimensions, UNIFORM INFERENCE IN HIGH-DIMENSIONAL DYNAMIC PANEL DATA MODELS WITH APPROXIMATELY SPARSE FIXED EFFECTS, Model selection for high-dimensional linear regression with dependent observations, Online Decision Making with High-Dimensional Covariates, An Equivalence between Critical Points for Rank Constraints Versus Low-Rank Factorizations, Noise-Robust Modes of the Retinal Population Code Have the Geometry of “Ridges” and Correspond to Neuronal Communities, Finite-sample analysis of \(M\)-estimators using self-concordance, A Bootstrap Lasso + Partial Ridge Method to Construct Confidence Intervals for Parameters in High-dimensional Sparse Linear Models, Hard thresholding regression, Kernel Meets Sieve: Post-Regularization Confidence Bands for Sparse Additive Model, PUlasso: High-Dimensional Variable Selection With Presence-Only Data, L2RM: Low-Rank Linear Regression Models for High-Dimensional Matrix Responses, A Tight Bound of Hard Thresholding, Simultaneous Clustering and Estimation of Heterogeneous Graphical Models, Prediction and estimation consistency of sparse multi-class penalized optimal scoring, Cut Pursuit: Fast Algorithms to Learn Piecewise Constant Functions on General Weighted Graphs, Minimax Optimal Procedures for Locally Private Estimation, Group Regularized Estimation Under Structural Hierarchy, Sorted concave 
penalized regression, Strong oracle optimality of folded concave penalized estimation, Recovery of simultaneous low rank and two-way sparse coefficient matrices, a nonconvex approach, Low Complexity Regularization of Linear Inverse Problems, SGL-SVM: a novel method for tumor classification via support vector machine with sparse group lasso, Introduction to the special issue on sparsity and regularization methods, Structured sparsity through convex optimization, Quasi-likelihood and/or robust estimation in high dimensions, A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers, FACTORISABLE MULTITASK QUANTILE REGRESSION, A study on tuning parameter selection for the high-dimensional lasso, On rank estimators in increasing dimensions, Lasso with convex loss: Model selection consistency and estimation, Sharp oracle inequalities for low-complexity priors, Dynamic Assortment Personalization in High Dimensions, A Generic Path Algorithm for Regularized Statistical Estimation, A Simple Method for Estimating Interactions Between a Treatment and a Large Number of Covariates, Optimality conditions for rank-constrained matrix optimization, Computational and statistical analyses for robust non-convex sparse regularized regression problem, Robust Wasserstein profile inference and applications to machine learning, Generalized high-dimensional trace regression via nuclear norm regularization, High-dimensional generalized linear models incorporating graphical structure among predictors, Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO, Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming, Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary, Sparsistency and agnostic inference in sparse PCA, Structured 
estimation for the nonparametric Cox model, On model selection consistency of regularized M-estimators, Oracle Inequalities for Convex Loss Functions with Nonlinear Targets, Robust Width: A Characterization of Uniformly Stable and Robust Compressed Sensing, High dimensional single index models, A group VISA algorithm for variable selection, Graph-Based Regularization for Regression Problems with Alignment and Highly Correlated Designs, Worst possible sub-directions in high-dimensional models, Covariate-adjusted inference for differential analysis of high-dimensional networks, A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery, Nonnegative-Lasso and application in index tracking, Penalized and constrained LAD estimation in fixed and high dimension, A general family of trimmed estimators for robust high-dimensional data analysis, An analysis of penalized interaction models, A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions, Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls, Minimum distance Lasso for robust high-dimensional regression, Testability of high-dimensional linear models with nonsparse structures, Adaptive log-density estimation, Exact post-selection inference, with application to the Lasso, Comprehensive comparative analysis and identification of RNA-binding protein domains: multi-class classification and feature selection, Doubly robust semiparametric inference using regularized calibrated estimation with high-dimensional data, Regularized high dimension low tubal-rank tensor regression, Post-model-selection inference in linear regression models: an integrated review, Joint estimation of precision matrices in heterogeneous populations, Geometric inference for general high-dimensional linear inverse problems, Oracle inequalities, variable selection and uniform inference in high-dimensional 
correlated random effects panel data models, Sharp MSE bounds for proximal denoising, Distributed testing and estimation under sparse high dimensional models, Penalized least square in sparse setting with convex penalty and non Gaussian errors, Sub-optimality of some continuous shrinkage priors, Sparse linear models and \(l_1\)-regularized 2SLS with high-dimensional endogenous regressors and instruments, A data-driven line search rule for support recovery in high-dimensional data analysis, Scalable methods for Bayesian selective inference, Trace regression model with simultaneously low rank and row(column) sparse parameter, Robust shrinkage estimation and selection for functional multiple linear model through LAD loss, Multivariate factorizable expectile regression with application to fMRI data, Gradient projection Newton pursuit for sparsity constrained optimization, Oracle inequalities for the lasso in the Cox model, The geometry of least squares in the 21st century, Simple bounds for recovering low-complexity models, Relaxed sparse eigenvalue conditions for sparse estimation via non-convex regularized regression, Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression, The convex geometry of linear inverse problems, Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions, Sharp recovery bounds for convex demixing, with applications, Stability of the elastic net estimator, Estimating sparse networks with hubs, Prediction error after model search, Robust machine learning by median-of-means: theory and practice, Lasso guarantees for \(\beta \)-mixing heavy-tailed time series, A two-step method for estimating high-dimensional Gaussian graphical models, A constrained \(\ell1\) minimization approach for estimating multiple sparse Gaussian or nonparanormal graphical models, Statistical analysis of sparse approximate factor models, Variable selection for 
sparse logistic regression, The Lasso problem and uniqueness, The log-linear group-lasso estimator and its asymptotic properties, Uniqueness conditions for low-rank matrix recovery, Is distribution-free inference possible for binary regression?, Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions, An analysis of the SPARSEVA estimate for the finite sample data case, Estimation of high-dimensional partially-observed discrete Markov random fields, High-dimensional grouped folded concave penalized estimation via the LLA algorithm, Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion, Optimal computational and statistical rates of convergence for sparse nonconvex learning problems, Generalized M-estimators for high-dimensional Tobit I models, Robust inference on average treatment effects with possibly more covariates than observations, Pathwise coordinate optimization for sparse learning: algorithm and theory, Decomposable norm minimization with proximal-gradient homotopy algorithm, Restricted strong convexity implies weak submodularity, The landscape of empirical risk for nonconvex losses, Regularization and the small-ball method. 
I: Sparse recovery, I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error, A Cluster Elastic Net for Multivariate Regression, High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity, Estimation of high-dimensional low-rank matrices, Estimation of (near) low-rank matrices with noise and high-dimensional scaling, Model-assisted inference for treatment effects using regularized calibrated estimation with high-dimensional data, On asymptotically optimal confidence regions and tests for high-dimensional models, Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations, Asymptotic properties on high-dimensional multivariate regression M-estimation, Regularized estimation in sparse high-dimensional time series models, Transfer Learning under High-dimensional Generalized Linear Models, Gelfand numbers related to structured sparsity and Besov space embeddings with small mixed smoothness, Confidence intervals for high-dimensional inverse covariance estimation, Consistent multiple changepoint estimation with fused Gaussian graphical models, A Note on Coding and Standardization of Categorical Variables in (Sparse) Group Lasso Regression, Graphical-model based high dimensional generalized linear models, Matrix optimization based Euclidean embedding with outliers, Fast global convergence of gradient methods for high-dimensional statistical recovery, An outer-inner linearization method for non-convex and nondifferentiable composite regularization problems, The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning, Integrative methods for post-selection inference under convex constraints, The cost of privacy: optimal rates of convergence for parameter estimation with differential privacy, Sampling from non-smooth distributions through Langevin diffusion, Low-rank matrix recovery with composite optimization: good 
conditioning and rapid convergence, Inference for high-dimensional varying-coefficient quantile regression, Sparse regression for extreme values, The finite sample properties of sparse M-estimators with pseudo-observations, Asymptotic linear expansion of regularized M-estimators, Quantile regression feature selection and estimation with grouped variables using Huber approximation, A Lagrange-Newton algorithm for sparse nonlinear programming, Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses, High-performance statistical computing in the computing environments of the 2020s, On the grouping effect of the \(l_{1-2}\) models, High dimensional generalized linear models for temporal dependent data, On solutions of sparsity constrained optimization, Robust transfer learning of high-dimensional generalized linear model, Multiple change points detection in high-dimensional multivariate regression, Generalized linear models with structured sparsity estimators, Rejoinder to “Reader reaction to ‘Outcome‐adaptive Lasso: Variable selection for causal inference’ by Shortreed and Ertefaie (2017)”, Multivariate functional response low‐rank regression with an application to brain imaging data, Oracle inequality for sparse trace regression models with exponential \(\beta\)-mixing errors, Inference on heterogeneous treatment effects in high‐dimensional dynamic panels under weak dependence, High-Dimensional Gaussian Graphical Regression Models with Covariates, Concave Likelihood-Based Regression with Finite-Support Response Variables, Penalized wavelet nonparametric univariate logistic regression for irregular spaced data, The rate of convergence for sparse and low-rank quantile trace regression, Sparse and Low-Rank Matrix Quantile Estimation With Application to Quadratic Regression, Integrative sparse reduced-rank regression via orthogonal rotation for analysis of high-dimensional multi-source data, Sparse Laplacian shrinkage for 
nonparametric transformation survival model, Model-Assisted Uniformly Honest Inference for Optimal Treatment Regimes in High Dimension, Correlation Tensor Decomposition and Its Application in Spatial Imaging Data, Honest Confidence Sets for High-Dimensional Regression by Projection and Shrinkage, Profile GMM estimation of panel data models with interactive fixed effects, A Likelihood-Based Approach for Multivariate Categorical Response Regression in High Dimensions, Individual Data Protected Integrative Regression Analysis of High-Dimensional Heterogeneous Data, Rate-optimal robust estimation of high-dimensional vector autoregressive models, Robust matrix estimations meet Frank-Wolfe algorithm, Distributed Decoding From Heterogeneous 1-Bit Compressive Measurements, A convex-Nonconvex strategy for grouped variable selection, Post-selection Inference of High-dimensional Logistic Regression Under Case–Control Design, The nonparametric Box-Cox model for high-dimensional regression analysis, Multi-Task Learning with High-Dimensional Noisy Images, Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis, A network Lasso model for regression, Cross-Fitted Residual Regression for High-Dimensional Heteroscedasticity Pursuit, Two-stage communication-efficient distributed sparse M-estimation with missing data, Variable selection and regularization via arbitrary rectangle-range generalized elastic net, A unified precision matrix estimation framework via sparse column-wise inverse operator under weak sparsity, The Lasso with general Gaussian designs with applications to hypothesis testing, Carving model-free inference, Envelopes and principal component regression, Multiple Change Point Detection in Reduced Rank High Dimensional Vector Autoregressive Models, Approximate Selective Inference via Maximum Likelihood, Statistical performance of quantile tensor regression with convex regularization, Expectile trace regression via 
low-rank and group sparsity regularization, Distributed estimation and inference for spatial autoregression model with large scale networks, High-dimensional functional graphical model structure learning via neighborhood selection approach, A joint estimation for the high-dimensional regression modeling on stratified data


Uses Software


Cites Work