Pages that link to "Item:Q5965310"
From MaRDI portal
The following pages link to A general theory of concave regularization for high-dimensional sparse estimation problems (Q5965310):
Displayed 50 items.
- Bayesian Bootstrap Spike-and-Slab LASSO (Q127195)
- Asymptotic normality and optimalities in estimation of large Gaussian graphical models (Q152845)
- Global solutions to folded concave penalized nonconvex learning (Q282459)
- Best subset selection via a modern optimization lens (Q282479)
- Random subspace method for high-dimensional regression with the \texttt{R} package \texttt{regRSM} (Q311298)
- Oracle inequalities for the lasso in the Cox model (Q366963)
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems (Q482875)
- Penalized least squares estimation with weakly dependent data (Q525888)
- Tuning parameter selection for the adaptive LASSO in the autoregressive model (Q526980)
- Going beyond oracle property: selection consistency and uniqueness of local solution of the generalized linear model (Q670138)
- A theoretical understanding of self-paced learning (Q778415)
- Fitting sparse linear models under the sufficient and necessary condition for model identification (Q826666)
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls (Q830557)
- Nonconvex penalized reduced rank regression and its oracle properties in high dimensions (Q900821)
- Model selection in high-dimensional quantile regression with seamless \(L_0\) penalty (Q900968)
- Distributed testing and estimation under sparse high dimensional models (Q1650081)
- Variable selection and parameter estimation with the Atan regularization method (Q1658121)
- Homogeneity detection for the high-dimensional generalized linear model (Q1658352)
- Principal components adjusted variable screening (Q1658427)
- The use of random-effect models for high-dimensional variable selection problems (Q1659014)
- Balanced estimation for high-dimensional measurement error models (Q1663093)
- Relaxed sparse eigenvalue conditions for sparse estimation via non-convex regularized regression (Q1677029)
- A doubly sparse approach for group variable selection (Q1680797)
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions (Q1683689)
- Quantile regression for additive coefficient models in high dimensions (Q1686242)
- A two-stage regularization method for variable selection and forecasting in high-order interaction model (Q1723055)
- High-dimensional grouped folded concave penalized estimation via the LLA algorithm (Q1726165)
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error (Q1750288)
- Learning latent variable Gaussian graphical model for biomolecular network with low sample complexity (Q2011725)
- Tractable ADMM schemes for computing KKT points and local minimizers for \(\ell_0\)-minimization problems (Q2026765)
- A unified primal dual active set algorithm for nonconvex sparse recovery (Q2038299)
- Iteratively reweighted \(\ell_1\)-penalized robust regression (Q2044416)
- Second-order Stein: SURE for SURE and other applications in high-dimensional inference (Q2054467)
- Dynamic variable selection with spike-and-slab process priors (Q2057381)
- Nonnegative estimation and variable selection under minimax concave penalty for sparse high-dimensional linear regression models (Q2066516)
- Smoothing Newton method for \(\ell^0\)-\(\ell^2\) regularized linear inverse problem (Q2072164)
- Weighted thresholding homotopy method for sparsity constrained optimization (Q2082209)
- Robust moderately clipped LASSO for simultaneous outlier detection and variable selection (Q2091331)
- High-dimensional linear regression with hard thresholding regularization: theory and algorithm (Q2097492)
- \(\ell_0\)-regularized high-dimensional accelerated failure time model (Q2129574)
- On the strong oracle property of concave penalized estimators with infinite penalty derivative at the origin (Q2131914)
- GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee (Q2135875)
- De-biasing the Lasso with degrees-of-freedom adjustment (Q2136990)
- Bias versus non-convexity in compressed sensing (Q2155168)
- Almost sure uniqueness of a global minimum without convexity (Q2176635)
- Sparse signal reconstruction via the approximations of \(\ell_0\) quasinorm (Q2190319)
- Subspace learning by \(\ell^0\)-induced sparsity (Q2193787)
- Matrix completion with nonconvex regularization: spectral operators and scalable algorithms (Q2195855)
- Estimation and inference for precision matrices of nonstationary time series (Q2215745)
- Robust low-rank multiple kernel learning with compound regularization (Q2239908)