Pathwise coordinate optimization for sparse learning: algorithm and theory (Q1747736)

From MaRDI portal
Property / describes a project that uses: sparsenet
Property / describes a project that uses: huge
Property / describes a project that uses: camel
Property / MaRDI profile type: MaRDI publication profile
Property / arXiv ID: 1412.7477
Property / cites work: Simultaneous analysis of Lasso and Dantzig selector
Property / cites work: Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Property / cites work: Strong oracle optimality of folded concave penalized estimation
Property / cites work: Pathwise coordinate optimization
Property / cites work: Q3998409
Property / cites work: On Faster Convergence of Cyclic Block Coordinate Descent-type Methods for Strongly Convex Minimization
Property / cites work: Q5744812
Property / cites work: Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
Property / cites work: On the complexity analysis of randomized block-coordinate descent methods
Property / cites work: On the convergence of the coordinate descent method for convex differentiable minimization
Property / cites work: <i>SparseNet</i>: Coordinate Descent With Nonconvex Penalties
Property / cites work: High-dimensional graphs and variable selection with the Lasso
Property / cites work: A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
Property / cites work: Gradient methods for minimizing composite functions
Property / cites work: Q2896143
Property / cites work: Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
Property / cites work: A Unified Convergence Analysis of Block Successive Minimization Methods for Nonsmooth Optimization
Property / cites work: Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
Property / cites work: Q5396661
Property / cites work: Q4864293
Property / cites work: Strong Rules for Discarding Predictors in Lasso-Type Problems
Property / cites work: Calibrating nonconvex penalized regression in ultra-high dimension
Property / cites work: Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
Property / cites work: Nearly unbiased variable selection under minimax concave penalty
Property / cites work: Multi-stage convex relaxation for feature selection
Property / cites work: The sparsity and bias of the LASSO selection in high-dimensional linear regression
Property / cites work: Pathwise coordinate optimization for sparse learning: algorithm and theory
Property / cites work: Q3174050
Property / cites work: The huge Package for High-dimensional Undirected Graph Estimation in R
Property / cites work: The Adaptive Lasso and Its Oracle Properties
Property / cites work: One-step sparse estimates in nonconcave penalized likelihood models
Property / OpenAlex ID: W2963994662
Latest revision as of 09:50, 30 July 2024

scientific article

Language: English
Label: Pathwise coordinate optimization for sparse learning: algorithm and theory
Description: scientific article

    Statements

    Pathwise coordinate optimization for sparse learning: algorithm and theory (English)

    Publication date: 27 April 2018
    In the modeling of high-dimensional data, where the number of variables greatly exceeds the sample size, it is often assumed that only a small subset of the variables is relevant, and various regularized approaches have been proposed to estimate the regression coefficients of these variables. Specific optimization algorithms have been developed to compute such estimates; among them, the pathwise coordinate optimization framework has shown good empirical performance, but no theoretical guarantee had been established for it. In this paper, a new pathwise calibrated sparse shooting algorithm is proposed that improves the existing pathwise coordinate optimization framework and is shown to attain linear convergence together with optimal statistical properties in parameter estimation and support recovery. The proposed algorithm is also compared with related algorithms in numerical experiments.
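As context for the framework the review describes, the following is a minimal, illustrative sketch of pathwise coordinate optimization for the plain lasso: coordinate-wise soft-thresholding updates are iterated to convergence at each value of a geometrically decreasing regularization path, with each solution warm-starting the next. This is only a sketch of the generic framework, not the paper's calibrated sparse shooting algorithm; the function names and path parameters here are hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def pathwise_lasso_cd(X, y, n_lambdas=20, eps=0.01, max_iter=200, tol=1e-6):
    # Pathwise coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1.
    # Warm starts: the solution at each lambda initializes the next, smaller one.
    n, d = X.shape
    col_sq = (X ** 2).sum(axis=0) / n        # per-coordinate curvature (1/n)||X_j||^2
    lam_max = np.max(np.abs(X.T @ y)) / n    # smallest lambda whose solution is all-zero
    lambdas = lam_max * np.logspace(0.0, np.log10(eps), n_lambdas)
    beta = np.zeros(d)
    r = y.copy()                             # running residual y - X beta
    path = []
    for lam in lambdas:
        for _ in range(max_iter):
            max_step = 0.0
            for j in range(d):
                old = beta[j]
                # Univariate least-squares target for coordinate j, others held fixed.
                rho = X[:, j] @ r / n + col_sq[j] * old
                beta[j] = soft_threshold(rho, lam) / col_sq[j]
                if beta[j] != old:
                    r -= X[:, j] * (beta[j] - old)
                    max_step = max(max_step, abs(beta[j] - old))
            if max_step < tol:               # coordinate cycle converged at this lambda
                break
        path.append((lam, beta.copy()))
    return path
```

The paper's contribution lies in calibrating this scheme (active-set strategy, adaptive regularization, nonconvex penalties) so that the whole path admits the linear-convergence and statistical guarantees the abstract mentions; the sketch above carries none of those refinements.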
    Keywords: nonconvex sparse learning; pathwise coordinate optimization; global linear convergence; optimal statistical rates of convergence; oracle property