Hard Thresholding Regularised Logistic Regression: Theory and Algorithms
Publication: 5061727
DOI: 10.4208/eajam.110121.210621 · zbMath: 1481.62043 · OpenAlex: W3216777167 · MaRDI QID: Q5061727
Chang Zhu, Lican Kang, Yuan Luo, Yan Yan Liu
Publication date: 14 March 2022
Published in: East Asian Journal on Applied Mathematics
Full work available at URL: https://doi.org/10.4208/eajam.110121.210621
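As a rough, generic illustration of the technique named in the title, the sketch below fits a sparse logistic regression by iterative hard thresholding: a gradient step on the logistic loss followed by keeping only the s largest-magnitude coefficients. The function names, step size, and iteration count are assumptions made here for illustration; this is not the specific algorithm or theory developed in the publication itself.

import numpy as np

def sigmoid(z):
    # Logistic link function.
    return 1.0 / (1.0 + np.exp(-z))

def hard_threshold(beta, s):
    # Keep the s largest-magnitude entries of beta, zero out the rest.
    out = np.zeros_like(beta)
    if s > 0:
        keep = np.argsort(np.abs(beta))[-s:]
        out[keep] = beta[keep]
    return out

def iht_logistic(X, y, s, step=0.1, n_iter=500):
    # Illustrative iterative hard thresholding for logistic regression:
    # gradient descent on the average logistic loss, projected onto the
    # set of s-sparse coefficient vectors after every step.
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (sigmoid(X @ beta) - y) / n
        beta = hard_threshold(beta - step * grad, s)
    return beta

# Example use on synthetic data with three truly nonzero coefficients.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
beta_true = np.zeros(50)
beta_true[:3] = [2.0, -1.5, 1.0]
y = rng.binomial(1, sigmoid(X @ beta_true))
beta_hat = iht_logistic(X, y, s=3)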
Cites Work
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Gradient methods for minimizing composite functions
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Wavelet methods in statistics: some recent developments and their applications
- A unified primal dual active set algorithm for nonconvex sparse recovery
- High-dimensional generalized linear models and the lasso
- Calibrating nonconvex penalized regression in ultra-high dimension
- Variational Analysis
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sequential Lasso Cum EBIC for Feature Selection With Ultra-High Dimensional Feature Space
- L1-Regularization Path Algorithm for Generalized Linear Models
- Regularization and Variable Selection Via the Elastic Net
- Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- A general theory of concave regularization for high-dimensional sparse estimation problems