An accelerated communication-efficient primal-dual optimization framework for structured machine learning (Q5859008)

From MaRDI portal
Property / cites work (all normal rank):
    A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
    Q5396673
    Accelerated, Parallel, and Proximal Coordinate Descent
    Asynchronous Stochastic Coordinate Descent: Parallelism and Convergence Properties
    Distributed optimization with arbitrary local solvers
    Gradient methods for minimizing composite functions
    Q5638112
    Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
    Q4558572
    Communication-efficient distributed optimization of self-concordant empirical loss
    Algorithm 778: L-BFGS-B


Language: English
Label: An accelerated communication-efficient primal-dual optimization framework for structured machine learning
Description: scientific article; zbMATH DE number 7333780

    Statements

    Title: An accelerated communication-efficient primal-dual optimization framework for structured machine learning (English)
    Publication date: 15 April 2021
    Keywords: nonlinear optimization; nonsmooth optimization; distributed optimization; machine learning; accelerated methods