Pathwise coordinate optimization
Publication:2466463
DOI: 10.1214/07-AOAS131
zbMATH Open: 1378.90064
arXiv: 0708.1485
MaRDI QID: Q2466463
Authors: Holger Höfling, Robert Tibshirani, Jerome H. Friedman, Trevor Hastie
Publication date: 15 January 2008
Published in: The Annals of Applied Statistics
Abstract: We consider "one-at-a-time" coordinate-wise descent algorithms for a class of convex optimization problems. An algorithm of this kind has been proposed for the L1-penalized regression (lasso) in the literature, but it seems to have been largely ignored. Indeed, it appears that coordinate-wise algorithms are not often used in convex optimization. We show that this algorithm is very competitive with the well-known LARS (or homotopy) procedure in large lasso problems, and that it can be applied to related methods such as the garrote and elastic net. It turns out that coordinate-wise descent does not work in the "fused lasso," however, so we derive a generalized algorithm that computes the solution in much less time than a standard convex optimizer. Finally, we generalize the procedure to the two-dimensional fused lasso, and demonstrate its performance on some image smoothing problems.
Full work available at URL: https://arxiv.org/abs/0708.1485
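For the lasso, the "one-at-a-time" update described in the abstract reduces each coordinate step to soft-thresholding a univariate least-squares coefficient computed from the partial residual. The following is a minimal sketch in Python/NumPy of cyclic coordinate descent for the objective (1/(2n))·‖y − Xβ‖² + λ‖β‖₁; it is not the authors' implementation, and the function names are illustrative:

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator: S(z, gamma) = sign(z) * max(|z| - gamma, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for (1/(2n))||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature x_j'x_j / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove all coordinates except j from the fit.
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n       # univariate least-squares numerator
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta

# With an orthonormal design the update converges in a single sweep and
# matches the closed-form solution beta_j = S(y_j, n * lam).
X = np.eye(4)
y = np.array([2.0, 0.5, -3.0, 0.1])
print(lasso_coordinate_descent(X, y, lam=0.25))  # -> [ 1.  0. -2.  0.]
```

Each coordinate update is closed-form, which is what makes the pathwise scheme fast: solutions are computed over a decreasing grid of λ values, warm-starting each fit from the previous one.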
Recommendations
Cites Work
- Nonlinear total variation based noise removal algorithms
- Least angle regression. (With discussion)
- Title not available
- Convergence of a block coordinate descent method for nondifferentiable minimization
- Adapting to Unknown Smoothness via Wavelet Shrinkage
- Atomic Decomposition by Basis Pursuit
- Sparsity and Smoothness Via the Fused Lasso
- Regularization and Variable Selection Via the Elastic Net
- Model Selection and Estimation in Regression with Grouped Variables
- Better Subset Regression Using the Nonnegative Garrote
- A new approach to variable selection in least squares problems
- Spatial smoothing and hot spot detection for CGH data using the fused lasso
- Title not available
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- The Explicit Inverse of a Tridiagonal Matrix
Cited In (first 100 items)
- Learning Oncogenic Pathways from Binary Genomic Instability Data
- Simultaneous estimation of quantile regression functions using B-splines and total variation penalty
- A coordinate descent method for total variation minimization
- Usage of the GO estimator in high dimensional linear models
- Fusion learning algorithm to combine partially heterogeneous Cox models
- Block decomposition methods for total variation by primal-dual stitching
- On faster convergence of cyclic block coordinate descent-type methods for strongly convex minimization
- A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property
- On efficiently solving the subproblems of a level-set method for fused lasso problems
- Hyperspectral image fusion with a new hybrid regularization
- A new scope of penalized empirical likelihood with high-dimensional estimating equations
- Convex and non-convex regularization methods for spatial point processes intensity estimation
- Exact post-selection inference for the generalized Lasso path
- Pathwise coordinate optimization for sparse learning: algorithm and theory
- Sparse principal component analysis via fractional function regularity
- Solution paths for the generalized Lasso with applications to spatially varying coefficients regression
- Partial penalized empirical likelihood ratio test under sparse case
- Double fused Lasso regularized regression with both matrix and vector valued predictors
- Shrinkage estimation and variable selection in multiple regression models with random coefficient autoregressive errors
- Estimation of ARMAX processes with noise corrupted output signal observations
- A new double-regularized regression using Liu and Lasso regularization
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Robust variable selection for finite mixture regression models
- Stable prediction in high-dimensional linear models
- Additive regression splines with total variation and non negative garrote penalties
- Cyclic coordinate-update algorithms for fixed-point problems: analysis and applications
- Outlier detection under a covariate-adjusted exponential regression model with censored data
- Variable selection in partially linear additive hazards model with grouped covariates and a diverging number of parameters
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- Penalized and Constrained Optimization: An Application to High-Dimensional Website Advertising
- Hierarchically penalized additive hazards model with diverging number of parameters
- Inexact proximal stochastic second-order methods for nonconvex composite optimization
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- An efficient Hessian based algorithm for solving large-scale sparse group Lasso problems
- Improving the incoherence of a learned dictionary via rank shrinkage
- Robust network-based analysis of the associations between (epi)genetic measurements
- Generalized additive partial linear models with high-dimensional covariates
- Title not available
- Dictionary learning based on nonnegative matrix factorization using parallel coordinate descent
- Logistic regression with weight grouping priors
- A doubly sparse approach for group variable selection
- Advanced algorithms for penalized quantile and composite quantile regression
- Model selection with distributed SCAD penalty
- Algorithm for overcoming the curse of dimensionality for time-dependent non-convex Hamilton-Jacobi equations arising from optimal control and differential games problems
- Paths Following Algorithm for Penalized Logistic Regression Using SCAD and MCP
- Genetic algorithm versus classical methods in sparse index tracking
- A plug-in approach to sparse and robust principal component analysis
- Variable selection and estimation using a continuous approximation to the \(L_0\) penalty
- Regression with adaptive Lasso and correlation based penalty
- Monte Carlo simulation for Lasso-type problems by estimator augmentation
- Fast state-space methods for inferring dendritic synaptic connectivity
- On the total variation regularized estimator over a class of tree graphs
- Generalized methods and solvers for noise removal from piecewise constant signals. II: New methods
- Improving Sales Forecasting Accuracy: A Tensor Factorization Approach with Demand Awareness
- Dictionary-based image denoising by fused-lasso atom selection
- Sparse-smooth regularized singular value decomposition
- Regression with outlier shrinkage
- Inference for low‐ and high‐dimensional inhomogeneous Gibbs point processes
- A modified adaptive Lasso for identifying interactions in the Cox model with the heredity constraint
- Regularized 3D functional regression for brain image data via Haar wavelets
- Regression on manifolds: estimation of the exterior derivative
- Surveying and comparing simultaneous sparse approximation (or group-lasso) algorithms
- Discussion: "A significance test for the lasso"
- Discussion: "A significance test for the lasso"
- Discussion: "A significance test for the lasso"
- Discussion: "A significance test for the lasso"
- A generic coordinate descent solver for non-smooth convex optimisation
- The use of random-effect models for high-dimensional variable selection problems
- A multilevel framework for sparse optimization with application to inverse covariance estimation and logistic regression
- Adaptive log-density estimation
- The MM alternative to EM
- Group coordinate descent algorithms for nonconvex penalized regression
- Coordinate ascent for penalized semiparametric regression on high-dimensional panel count data
- Sparse regulatory networks
- Structure identification in panel data analysis
- Title not available
- Sparse low-rank separated representation models for learning from data
- Ultrahigh dimensional variable selection through the penalized maximum trimmed likelihood estimator
- The use of vector bootstrapping to improve variable selection precision in Lasso models
- Screening-based Bregman divergence estimation with NP-dimensionality
- Improved variable selection with forward-lasso adaptive shrinkage
- ParNes: A rapidly convergent algorithm for accurate recovery of sparse and approximately sparse signals
- Transductive versions of the Lasso and the Dantzig selector
- Group structure detection for a high‐dimensional panel data model
- Robust and adaptive algorithms for online portfolio selection
- Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation
- Nonstationary Modeling With Sparsity for Spatial Data via the Basis Graphical Lasso
- SPADES and mixture models
- Solution path clustering with adaptive concave penalty
- Sparse and robust normal and \(t\)-portfolios by penalized \(L_q\)-likelihood minimization
- Fast and scalable Lasso via stochastic Frank-Wolfe methods with a convergence guarantee
- Block coordinate descent algorithms for large-scale sparse multiclass classification
- SICA for Cox's proportional hazards model with a diverging number of parameters
- Feature selection guided by structural information
- Block coordinate proximal gradient methods with variable Bregman functions for nonsmooth separable optimization
- Penalized variable selection in competing risks regression
- Multicategory vertex discriminant analysis for high-dimensional data
- Random Coordinate Descent Methods for Nonseparable Composite Optimization
- Marginalized Lasso in sparse regression
- Discussion: "A significance test for the lasso"
Uses Software
This page was built for publication: Pathwise coordinate optimization