DiSCO
Cited in (22):
- Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
- Graph-dependent implicit regularisation for distributed stochastic subgradient descent
- Distributed adaptive Newton methods with global superlinear convergence
- Limited-memory common-directions method for large-scale optimization: convergence, parallelization, and distributed optimization
- Generalized self-concordant analysis of Frank-Wolfe algorithms
- Communication-efficient distributed statistical inference
- Distributed block-diagonal approximation methods for regularized empirical risk minimization
- Composite convex optimization with global and local inexact oracles
- Finite-sample analysis of \(M\)-estimators using self-concordance
- Generalized self-concordant functions: a recipe for Newton-type methods
- scientific article; zbMATH DE number 7306895 (no title available)
- A general distributed dual coordinate optimization framework for regularized loss minimization
- TernGrad
- ADD-OPT
- SGDLibrary
- AIDE
- DSCOVR
- ExtraPush
- AsySPA
- Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
- DSCOVR: randomized primal-dual block coordinate algorithms for asynchronous distributed optimization
- Distributed stochastic variance reduced gradient methods by sampling extra data with replacement