Pages that link to "Item:Q2438760"
From MaRDI portal
The following pages link to Calibrating nonconvex penalized regression in ultra-high dimension (Q2438760):
Displaying 50 items.
- Global solutions to folded concave penalized nonconvex learning (Q282459) (← links)
- Designing penalty functions in high dimensional problems: the role of tuning parameters (Q309586) (← links)
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems (Q482875) (← links)
- Tuning parameter selection for the adaptive LASSO in the autoregressive model (Q526980) (← links)
- Going beyond oracle property: selection consistency and uniqueness of local solution of the generalized linear model (Q670138) (← links)
- Stable portfolio selection strategy for mean-variance-CVaR model under high-dimensional scenarios (Q783138) (← links)
- An alternating direction method of multipliers for MCP-penalized regression with high-dimensional data (Q1633879) (← links)
- Variable selection via generalized SELO-penalized linear regression models (Q1640691) (← links)
- Variable selection and parameter estimation with the Atan regularization method (Q1658121) (← links)
- Homogeneity detection for the high-dimensional generalized linear model (Q1658352) (← links)
- Moderately clipped Lasso (Q1663146) (← links)
- A doubly sparse approach for group variable selection (Q1680797) (← links)
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions (Q1683689) (← links)
- A systematic review on model selection in high-dimensional regression (Q1726155) (← links)
- Portal nodes screening for large scale social networks (Q1740287) (← links)
- Review: Reversed low-rank ANOVA model for transforming high dimensional genetic data into low dimension (Q1740304) (← links)
- Pathwise coordinate optimization for sparse learning: algorithm and theory (Q1747736) (← links)
- DC programming and DCA: thirty years of developments (Q1749443) (← links)
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error (Q1750288) (← links)
- Minimum average variance estimation with group Lasso for the multivariate response central mean subspace (Q2034465) (← links)
- A unified primal dual active set algorithm for nonconvex sparse recovery (Q2038299) (← links)
- Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses (Q2091840) (← links)
- High-dimensional linear regression with hard thresholding regularization: theory and algorithm (Q2097492) (← links)
- High-dimensional variable screening through kernel-based conditional mean dependence (Q2112254) (← links)
- \(\ell_0\)-regularized high-dimensional accelerated failure time model (Q2129574) (← links)
- On the strong oracle property of concave penalized estimators with infinite penalty derivative at the origin (Q2131914) (← links)
- GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee (Q2135875) (← links)
- A data-driven line search rule for support recovery in high-dimensional data analysis (Q2157522) (← links)
- A unifying framework of high-dimensional sparse estimation with difference-of-convex (DC) regularizations (Q2163076) (← links)
- Nonconcave penalized estimation in sparse vector autoregression model (Q2180066) (← links)
- Test of significance for high-dimensional longitudinal data (Q2215753) (← links)
- Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming (Q2330643) (← links)
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary (Q2330649) (← links)
- Calibrating nonconvex penalized regression in ultra-high dimension (Q2438760) (← links)
- A primal and dual active set algorithm for truncated \(L_1\) regularized logistic regression (Q2691265) (← links)
- A Necessary Condition for the Strong Oracle Property (Q2815602) (← links)
- (Q4558147) (← links)
- Hard thresholding regression (Q4629285) (← links)
- An ADMM with continuation algorithm for non-convex SICA-penalized regression in high dimensions (Q4960646) (← links)
- Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO (Q4999164) (← links)
- (Q5004058) (← links)
- Sparse graphical models via calibrated concave convex procedure with application to fMRI data (Q5037034) (← links)
- Variable Selection With Second-Generation P-Values (Q5050808) (← links)
- High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks (Q5060495) (← links)
- Hard Thresholding Regularised Logistic Regression: Theory and Algorithms (Q5061727) (← links)
- Least-Square Approximation for a Distributed System (Q5066485) (← links)
- A polynomial algorithm for best-subset selection problem (Q5073242) (← links)
- An improved algorithm for high-dimensional continuous threshold expectile model with variance heterogeneity (Q5083335) (← links)
- A primal dual active set with continuation algorithm for high-dimensional nonconvex SICA-penalized regression (Q5107360) (← links)
- A Tuning-free Robust and Efficient Approach to High-dimensional Regression (Q5146020) (← links)