Pages that link to "Item:Q5965308"
The following pages link to A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers (Q5965308):
Displaying 50 items.
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions (Q1683689)
- A constrained \(\ell_1\) minimization approach for estimating multiple sparse Gaussian or nonparanormal graphical models (Q1698844)
- An analysis of the SPARSEVA estimate for the finite sample data case (Q1716452)
- High-dimensional grouped folded concave penalized estimation via the LLA algorithm (Q1726165)
- Pathwise coordinate optimization for sparse learning: algorithm and theory (Q1747736)
- Regularization and the small-ball method. I: Sparse recovery (Q1750281)
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error (Q1750288)
- The convex geometry of linear inverse problems (Q1928276)
- The Lasso problem and uniqueness (Q1951165)
- Restricted strong convexity implies weak submodularity (Q1990594)
- The landscape of empirical risk for nonconvex losses (Q1991675)
- Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations (Q2022171)
- Asymptotic properties on high-dimensional multivariate regression M-estimation (Q2022560)
- Consistent multiple changepoint estimation with fused Gaussian graphical models (Q2042434)
- Graphical-model based high dimensional generalized linear models (Q2044367)
- Matrix optimization based Euclidean embedding with outliers (Q2044470)
- An outer-inner linearization method for non-convex and nondifferentiable composite regularization problems (Q2046332)
- The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning (Q2054498)
- Integrative methods for post-selection inference under convex constraints (Q2054531)
- The cost of privacy: optimal rates of convergence for parameter estimation with differential privacy (Q2054532)
- Sampling from non-smooth distributions through Langevin diffusion (Q2065460)
- Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence (Q2067681)
- Inference for high-dimensional varying-coefficient quantile regression (Q2074309)
- Sparse regression for extreme values (Q2074318)
- The finite sample properties of sparse M-estimators with pseudo-observations (Q2075446)
- Asymptotic linear expansion of regularized M-estimators (Q2075454)
- Quantile regression feature selection and estimation with grouped variables using Huber approximation (Q2080351)
- A Lagrange-Newton algorithm for sparse nonlinear programming (Q2089792)
- Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses (Q2091840)
- High-performance statistical computing in the computing environments of the 2020s (Q2092893)
- On the grouping effect of the \(l_{1-2}\) models (Q2093808)
- High dimensional generalized linear models for temporal dependent data (Q2108473)
- Covariate-adjusted inference for differential analysis of high-dimensional networks (Q2121714)
- Penalized and constrained LAD estimation in fixed and high dimension (Q2122803)
- Testability of high-dimensional linear models with nonsparse structures (Q2131247)
- Adaptive log-density estimation (Q2131904)
- Doubly robust semiparametric inference using regularized calibrated estimation with high-dimensional data (Q2137036)
- Regularized high dimension low tubal-rank tensor regression (Q2137811)
- Post-model-selection inference in linear regression models: an integrated review (Q2137823)
- Penalized least square in sparse setting with convex penalty and non Gaussian errors (Q2154605)
- A data-driven line search rule for support recovery in high-dimensional data analysis (Q2157522)
- Gradient projection Newton pursuit for sparsity constrained optimization (Q2168680)
- Estimating sparse networks with hubs (Q2196140)
- Prediction error after model search (Q2196193)
- Robust machine learning by median-of-means: theory and practice (Q2196199)
- Lasso guarantees for \(\beta\)-mixing heavy-tailed time series (Q2196212)
- A two-step method for estimating high-dimensional Gaussian graphical models (Q2197843)
- Statistical analysis of sparse approximate factor models (Q2199708)
- Variable selection for sparse logistic regression (Q2202033)
- Is distribution-free inference possible for binary regression? (Q2209818)