Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization (Q2696917)

From MaRDI portal
 
Property / DOI: 10.1007/s10589-022-00432-5
Property / author: Xin-Wei Liu
Property / describes a project that uses: MNIST
Property / MaRDI profile type: MaRDI publication profile
Property / full work available at URL: https://doi.org/10.1007/s10589-022-00432-5
Property / OpenAlex ID: W4309746937
Property / cites work: Optimization Methods for Large-Scale Machine Learning
Property / cites work: A Stochastic Approximation Method
Property / cites work: Accelerated gradient methods for nonconvex nonlinear and stochastic programming
Property / cites work: Distributed Subgradient Methods for Multi-Agent Optimization
Property / cites work: On the Convergence of Decentralized Gradient Descent
Property / cites work: Fast Distributed Gradient Methods
Property / cites work: Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
Property / cites work: Exact spectral-like gradient method for distributed optimization
Property / cites work: Distributed Heavy-Ball: A Generalization and Acceleration of First-Order Methods With Gradient Tracking
Property / cites work: Accelerated Distributed Nesterov Gradient Descent
Property / cites work: A Family of Distributed Momentum Methods Over Directed Graphs With Linear Convergence
Property / cites work: Linear Convergence Rate of a Class of Distributed Augmented Lagrangian Algorithms
Property / cites work: On Nonconvex Decentralized Gradient Descent
Property / cites work: Second-Order Guarantees of Distributed Gradient Algorithms
Property / cites work: Diffusion Adaptation Strategies for Distributed Optimization and Learning Over Networks
Property / cites work: On the Influence of Bias-Correction on Distributed Stochastic Optimization
Property / cites work: Distributed stochastic gradient tracking methods
Property / cites work: Q5149230
Property / cites work: Variance-Reduced Decentralized Stochastic Optimization With Accelerated Convergence
Property / cites work: Distributed Learning in Non-Convex Environments—Part I: Agreement at a Linear Rate
Property / cites work: Distributed Learning in Non-Convex Environments—Part II: Polynomial Escape From Saddle-Points
Property / cites work: An Improved Convergence Analysis for Decentralized Online Stochastic Non-Convex Optimization
Property / cites work: Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
Property / cites work: Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
Property / cites work: Q3028166
Property / cites work: Discrete-time dynamic average consensus
Property / cites work: Q3267900

scientific article

Language: English
Label: Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
Description: scientific article

    Statements

    Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization (English)
    17 April 2023
    distributed non-convex optimization
    machine learning
    momentum methods
    optimization algorithms
    convergence rate
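
The statements above only name the article's topic. For orientation, below is a minimal, illustrative sketch of what a distributed stochastic gradient tracking iteration with heavy-ball momentum can look like on a toy problem. It is not the algorithm analyzed in the cited article: the ring topology, the quadratic local objectives, the step size `alpha`, the momentum weight `beta`, and the helper `stochastic_grad` are all assumptions made for this sketch.

```python
# Illustrative sketch only (not the paper's method): distributed stochastic
# gradient tracking with heavy-ball momentum on a toy least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Doubly stochastic mixing matrix for an assumed ring network.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] += 0.25
    W[i, (i + 1) % n_agents] += 0.25

# Each agent i privately holds f_i(x) = 0.5 * ||A_i x - b_i||^2 (toy data).
A = rng.normal(size=(n_agents, 10, dim))
b = rng.normal(size=(n_agents, 10))

def stochastic_grad(i, x):
    """Noisy gradient of f_i at x from one randomly sampled data row."""
    j = rng.integers(A.shape[1])
    return A[i, j] * (A[i, j] @ x - b[i, j])

alpha, beta = 0.01, 0.5                    # step size and momentum weight (assumed values)
x = rng.normal(size=(n_agents, dim))       # local iterates, one row per agent
g = np.array([stochastic_grad(i, x[i]) for i in range(n_agents)])
y = g.copy()                               # gradient trackers
v = np.zeros_like(x)                       # momentum buffers

for _ in range(2000):
    v = beta * v + alpha * y               # heavy-ball momentum on the tracked gradient
    x = W @ (x - v)                        # local descent step followed by consensus mixing
    g_new = np.array([stochastic_grad(i, x[i]) for i in range(n_agents)])
    y = W @ y + g_new - g                  # track the network-average stochastic gradient
    g = g_new

x_bar = x.mean(axis=0)
full_grad = np.mean([A[i].T @ (A[i] @ x_bar - b[i]) for i in range(n_agents)], axis=0)
print("consensus error:", np.linalg.norm(x - x_bar))
print("norm of average gradient:", np.linalg.norm(full_grad))
```

In this sketch each agent mixes its iterate with its neighbours through the doubly stochastic matrix W, the tracker y estimates the network-wide average stochastic gradient, and the buffer v adds momentum to accelerate progress; the actual algorithms and their convergence-rate analysis are given in the article recorded here.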

    Identifiers