On constrained and regularized high-dimensional regression
From MaRDI portal
Publication: 380022
DOI: 10.1007/s10463-012-0396-3
zbMath: 1329.62307
OpenAlex: W2005894982
Wikidata: Q41885326 (Scholia: Q41885326)
MaRDI QID: Q380022
Hui Zhou, Wei Pan, Yunzhang Zhu, Xiaotong Shen
Publication date: 11 November 2013
Published in: Annals of the Institute of Statistical Mathematics
Full work available at URL: http://europepmc.org/articles/pmc3898843
Keywords: \((p,n)\) versus fixed \(p\) asymptotics; constrained regression; difference convex programming; nonconvex regularization; parametric and nonparametric models
MSC classifications: Nonparametric regression and quantile regression (62G08); Parametric inference under constraints (62F30); General nonlinear regression (62J02); Statistical ranking and selection procedures (62F07)
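As background for the keywords above (a minimal illustrative sketch based on the title and keywords, not text quoted from the paper): constrained high-dimensional regression typically refers to the \(\ell_0\)-constrained least-squares problem
\[
\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2n}\|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_0 \le K,
\]
and a standard nonconvex surrogate is the truncated-\(\ell_1\) penalty \(J_\tau(\beta) = \sum_{j=1}^p \min(|\beta_j|/\tau,\, 1)\). Since \(\min(a, 1) = a - \max(a - 1,\, 0)\), the surrogate decomposes as
\[
J_\tau(\beta) = \frac{\|\beta\|_1}{\tau} - \sum_{j=1}^p \max\!\Bigl(\frac{|\beta_j|}{\tau} - 1,\; 0\Bigr),
\]
a difference of two convex functions. This is what makes difference convex (DC) programming applicable: the concave part is linearized at the current iterate and the resulting sequence of convex problems is solved until convergence.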
Related Items
- Efficient kernel-based variable selection with sparsistency
- Best subset selection via a modern optimization lens
- Designing penalty functions in high dimensional problems: the role of tuning parameters
- High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
- Relaxed sparse eigenvalue conditions for sparse estimation via non-convex regularized regression
- Simultaneous feature selection and outlier detection with optimality guarantees
- Learning sparse nonlinear dynamics via mixed-integer optimization
- Causal Inference in Transcriptome-Wide Association Studies with Invalid Instruments and GWAS Summary Data
- Sparse regression for low-dimensional time-dynamic varying coefficient models with application to air quality data
- Structure learning via unstructured kernel-based M-estimation
- Supervised homogeneity fusion: a combinatorial approach
- Subset Selection and the Cone of Factor-Width-k Matrices
- Reconstruction of a directed acyclic graph with intervention
- On High-Dimensional Constrained Maximum Likelihood Inference
- An efficient optimization approach for best subset selection in linear regression, with application to model selection and fitting in autoregressive time-series
- Broken adaptive ridge regression and its asymptotic properties
- High-dimensional sign-constrained feature selection and grouping
- OR Forum—An Algorithmic Approach to Linear Regression
- Learning sparse conditional distribution: an efficient kernel-based approach
- An effective procedure for feature subset selection in logistic regression based on information criteria
- \(\ell_{2,0}\)-norm based selection and estimation for multivariate generalized linear models
- Robust subset selection
- New bounds for subset selection from conic relaxations
- Weighted thresholding homotopy method for sparsity constrained optimization
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- A unified approach to model selection and sparse recovery using regularized least squares
- The Adaptive Lasso and Its Oracle Properties
- One-step sparse estimates in nonconcave penalized likelihood models
- Estimating the dimension of a model
- Local asymptotics for regression splines and confidence regions
- High-dimensional graphs and variable selection with the Lasso
- Atomic Decomposition by Basis Pursuit
- Extended Bayesian information criteria for model selection with large model spaces
- Variational Analysis
- An asymptotic property of model selection criteria
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Likelihood-Based Selection and Sharp Parameter Estimation
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_1\)-Constrained Quadratic Programming (Lasso)
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over \(\ell_q\)-Balls
- Smoothly Clipped Absolute Deviation on High Dimensions