An improved GLMNET for L1-regularized logistic regression
From MaRDI portal
Recommendations
- A comparison of optimization methods and software for large-scale L1-regularized linear classification
- An interior-point method for large-scale \(l_1\)-regularized logistic regression
- Natural coordinate descent algorithm for \(\ell_1\)-penalised regression in generalised linear models
- Dual coordinate descent methods for logistic regression and maximum entropy models
- A coordinate majorization descent algorithm for \(\ell_1\) penalized learning
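The recommended works above revolve around coordinate-descent, proximal, and Newton-type methods for L1-penalised models. As a minimal, self-contained illustration of the underlying optimization problem — a plain proximal-gradient (ISTA) sketch, not the paper's Newton-type GLMNET scheme; all data and parameter values here are illustrative:

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_logreg_ista(X, y, lam=0.1, step=0.25, iters=500):
    """Proximal-gradient (ISTA) sketch for
       min_w (1/n) * sum_i log(1 + exp(-y_i * x_i . w)) + lam * ||w||_1,
       with labels y in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = y * (X @ w)
        # Gradient of the averaged logistic loss.
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n
        # Gradient step on the smooth part, prox step on the L1 part.
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Tiny demo: 3 informative features out of 20; the L1 penalty
# drives most irrelevant coefficients exactly to zero.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -2.0, 1.5]
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(200))
w = l1_logreg_ista(X, y)
print("nonzero coefficients:", int((w != 0).sum()), "of", w.size)
```

The GLMNET family of methods instead minimizes a second-order (quadratic) approximation of the logistic loss by coordinate descent at each outer iteration, which converges in far fewer passes over the data than this first-order sketch.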
Cited in (31)
- Global complexity analysis of inexact successive quadratic approximation methods for regularized optimization under mild assumptions
- A multilevel framework for sparse optimization with application to inverse covariance estimation and logistic regression
- A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property
- An incremental aggregated proximal ADMM for linearly constrained nonconvex optimization with application to sparse logistic regression problems
- A distributed block coordinate descent method for training \(l_1\) regularized linear classifiers
- Accelerating inexact successive quadratic approximation for regularized optimization through manifold identification
- An inexact successive quadratic approximation method for L-1 regularized optimization
- Sparse approximations with interior point methods
- A centroid-based gene selection method for microarray data classification
- An extended Newton-type algorithm for \(\ell_2\)-regularized sparse logistic regression and its efficiency for classifying large-scale datasets
- Lasso regularization within the LocalGLMnet architecture
- An effective procedure for feature subset selection in logistic regression based on information criteria
- scientific article; zbMATH DE number 7400716
- A globally convergent proximal Newton-type method in nonsmooth convex optimization
- A multicriteria approach to find predictive and sparse models with stable feature selection for high-dimensional data
- Concave Likelihood-Based Regression with Finite-Support Response Variables
- Fused multiple graphical lasso
- A guide for sparse PCA: model comparison and applications
- A fast SVD-hidden-nodes based extreme learning machine for large-scale data analytics
- A modified local quadratic approximation algorithm for penalized optimization problems
- scientific article; zbMATH DE number 7370606
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Feature selection and tumor classification for microarray data using relaxed Lasso and generalized multi-class support vector machine
- A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer
- A new large-scale learning algorithm for generalized additive models
- Empirical risk minimization: probabilistic complexity and stepsize strategy
- scientific article; zbMATH DE number 7306914
- FarRSA for \(\ell_1\)-regularized convex optimization: local convergence and numerical experience
- A reduced-space algorithm for minimizing \(\ell_1\)-regularized convex functions
- scientific article; zbMATH DE number 6982986
- scientific article; zbMATH DE number 7370540
This page was built for publication: An improved GLMNET for L1-regularized logistic regression
MaRDI item Q5405181