Calibrating nonconvex penalized regression in ultra-high dimension
From MaRDI portal
Publication: 2438760
DOI: 10.1214/13-AOS1159
zbMath: 1281.62106
arXiv: 1311.4981
OpenAlex: W3100058837
Wikidata: Q33768098
Scholia: Q33768098
MaRDI QID: Q2438760
Lan Wang, Yongdai Kim, Run-Ze Li
Publication date: 6 March 2014
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1311.4981
Mathematics Subject Classification
- Nonparametric regression and quantile regression (62G08)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Linear regression; mixed models (62J05)
- Monte Carlo methods (65C05)
Related Items
- Achieving the oracle property of OEM with nonconvex penalties
- Sparse graphical models via calibrated concave convex procedure with application to fMRI data
- Global solutions to folded concave penalized nonconvex learning
- \(\ell_0\)-regularized high-dimensional accelerated failure time model
- On the strong oracle property of concave penalized estimators with infinite penalty derivative at the origin
- GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee
- Variable Selection With Second-Generation P-Values
- An alternating direction method of multipliers for MCP-penalized regression with high-dimensional data
- Designing penalty functions in high dimensional problems: the role of tuning parameters
- Variable selection via generalized SELO-penalized linear regression models
- High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
- Hard Thresholding Regularised Logistic Regression: Theory and Algorithms
- Least-Square Approximation for a Distributed System
- A data-driven line search rule for support recovery in high-dimensional data analysis
- Variable selection and parameter estimation with the Atan regularization method
- Homogeneity detection for the high-dimensional generalized linear model
- A unifying framework of high-dimensional sparse estimation with difference-of-convex (DC) regularizations
- Moderately clipped Lasso
- An improved algorithm for high-dimensional continuous threshold expectile model with variance heterogeneity
- Nonconcave penalized estimation in sparse vector autoregression model
- A doubly sparse approach for group variable selection
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- Grouped variable selection with discrete optimization: computational and statistical perspectives
- Variables selection using \(\mathcal{L}_0\) penalty
- A primal dual active set with continuation algorithm for high-dimensional nonconvex SICA-penalized regression
- Ultra-High Dimensional Quantile Regression for Longitudinal Data: An Application to Blood Pressure Analysis
- Reprint: Statistical inference for linear mediation models with high-dimensional mediators and application to studying stock reaction to COVID-19 pandemic
- Statistical Learning for Individualized Asset Allocation
- Projection Test for Mean Vector in High Dimensions
- Statistical inference for linear mediation models with high-dimensional mediators and application to studying stock reaction to COVID-19 pandemic
- Cross-Fitted Residual Regression for High-Dimensional Heteroscedasticity Pursuit
- A global two-stage algorithm for non-convex penalized high-dimensional linear regression problems
- Estimations and Tests for Generalized Mediation Models with High-Dimensional Potential Mediators
- Open issues and recent advances in DC programming and DCA
- A primal and dual active set algorithm for truncated \(L_1\) regularized logistic regression
- Calibrating nonconvex penalized regression in ultra-high dimension
- Test of significance for high-dimensional longitudinal data
- A systematic review on model selection in high-dimensional regression
- Hard thresholding regression
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- A Tuning-free Robust and Efficient Approach to High-dimensional Regression
- Targeted Random Projection for Prediction From High-Dimensional Features
- Going beyond oracle property: selection consistency and uniqueness of local solution of the generalized linear model
- Portal nodes screening for large scale social networks
- Review: Reversed low-rank ANOVA model for transforming high dimensional genetic data into low dimension
- Feature Screening for Network Autoregression Model
- Pathwise coordinate optimization for sparse learning: algorithm and theory
- DC programming and DCA: thirty years of developments
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Tuning parameter selection for the adaptive LASSO in the autoregressive model
- A polynomial algorithm for best-subset selection problem
- Minimum average variance estimation with group Lasso for the multivariate response central mean subspace
- A unified primal dual active set algorithm for nonconvex sparse recovery
- An ADMM with continuation algorithm for non-convex SICA-penalized regression in high dimensions
- A Necessary Condition for the Strong Oracle Property
- Stable portfolio selection strategy for mean-variance-CVaR model under high-dimensional scenarios
- Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO
- Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses
- High-dimensional linear regression with hard thresholding regularization: theory and algorithm
- High-dimensional variable screening through kernel-based conditional mean dependence
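For context, the nonconvex (folded concave) penalties this paper and its related items study have simple standard closed forms: SCAD (Fan and Li, 2001) and MCP (Zhang, 2010), both of which appear in the Cites Work list below. A minimal sketch of the two penalty functions; the function names and default shape parameters are illustrative, not taken from the paper's own code:

```python
def scad(t, lam, a=3.7):
    """SCAD penalty at |t| with tuning parameter lam > 0 and shape a > 2."""
    t = abs(t)
    if t <= lam:
        return lam * t                                            # Lasso-like near zero
    elif t <= a * lam:
        return (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))  # quadratic transition
    else:
        return (a + 1) * lam**2 / 2                               # constant: no bias for large |t|


def mcp(t, lam, a=3.0):
    """Minimax concave penalty at |t| with tuning parameter lam > 0 and shape a > 1."""
    t = abs(t)
    if t <= a * lam:
        return lam * t - t**2 / (2 * a)
    else:
        return a * lam**2 / 2
```

Both penalties match the Lasso near the origin (which controls false positives) but flatten out for large coefficients, removing the Lasso's shrinkage bias; calibrating the tuning parameter of such penalties in ultra-high dimension is the subject of the record above.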
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- Statistics for high-dimensional data. Methods, theory and applications.
- One-step sparse estimates in nonconcave penalized likelihood models
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Convex analysis approach to d. c. programming: Theory, algorithms and applications
- Nonconcave penalized likelihood with a diverging number of parameters.
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Multi-stage convex relaxation for feature selection
- Calibrating nonconvex penalized regression in ultra-high dimension
- Variable selection using MM algorithms
- Large sample properties of the smoothly clipped absolute deviation penalized maximum likelihood estimation on high dimensions
- Global optimality of nonconvex penalized estimators
- Shrinkage Tuning Parameter Selection with a Diverging number of Parameters
- Extended Bayesian information criteria for model selection with large model spaces
- The Concave-Convex Procedure
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- A Statistical View of Some Chemometrics Regression Tools
- Regularization Parameter Selections via Generalized Information Criterion
- Nonconcave Penalized Likelihood With NP-Dimensionality
- Smoothly Clipped Absolute Deviation on High Dimensions
- Tuning parameter selectors for the smoothly clipped absolute deviation method
- A general theory of concave regularization for high-dimensional sparse estimation problems