Saga
swMATH: 39677
MaRDI QID: Q55377
FDO: Q55377
Author name not available
Official website: https://paperswithcode.com/paper/saga-a-fast-incremental-gradient-method-with
Source code repository: https://github.com/adefazio/point-saga
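For orientation, the linked paper introduces the SAGA iteration: keep a table holding the most recently evaluated gradient of each component function, and correct the current stochastic gradient by the table average, which reduces variance while keeping the estimate unbiased. The Python sketch below is purely illustrative of this basic (non-composite) update; the function names and signatures are our own and are not taken from the point-saga repository.

```python
import numpy as np

def saga(grad_i, x0, n, step, iters, rng=None):
    """Minimal SAGA sketch for min_x (1/n) * sum_i f_i(x).

    grad_i(i, x) returns the gradient of the i-th component f_i at x.
    Illustrative only; see the paper and repository above for the
    actual implementation.
    """
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    # Table of the most recently evaluated gradient of each component.
    table = np.array([grad_i(i, x) for i in range(n)])
    avg = table.mean(axis=0)          # running average of the table
    for _ in range(iters):
        i = rng.integers(n)           # sample a component uniformly
        g_new = grad_i(i, x)
        # SAGA step: unbiased, variance-reduced gradient estimate.
        x = x - step * (g_new - table[i] + avg)
        # Refresh the table entry and its running average in O(d).
        avg = avg + (g_new - table[i]) / n
        table[i] = g_new
    return x

if __name__ == "__main__":
    # Toy least-squares demo: f_i(x) = 0.5 * (a_i . x - b_i)^2
    demo_rng = np.random.default_rng(1)
    A, b = demo_rng.normal(size=(50, 5)), demo_rng.normal(size=50)
    grad = lambda i, x: (A[i] @ x - b[i]) * A[i]
    x = saga(grad, np.zeros(5), n=50, step=0.01, iters=5000)
    print("residual:", np.linalg.norm(A @ x - b))
```

For the composite objectives treated in the paper, the gradient step is followed by a proximal operator of the regularizer; that refinement is omitted here for brevity.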
Cited In (only showing the first 100 items)
- Accelerating incremental gradient optimization with curvature information
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Nonsmoothness in machine learning: specific structure, proximal identification, and applications
- Stochastic nested variance reduction for nonconvex optimization
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Optimization methods for large-scale machine learning
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- A globally convergent incremental Newton method
- Fastest rates for stochastic mirror descent methods
- A tight bound of hard thresholding
- Catalyst acceleration for first-order convex optimization: from theory to practice
- A distributed flexible delay-tolerant proximal gradient algorithm
- Accelerated methods for nonconvex optimization
- A smooth inexact penalty reformulation of convex problems with linear constraints
- Deep relaxation: partial differential equations for optimizing deep neural networks
- An optimal randomized incremental gradient method
- Stochastic variance reduced gradient methods using a trust-region-like scheme
- Efficient first-order methods for convex minimization: a constructive approach
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- Accelerated and Instance-Optimal Policy Evaluation with Linear Function Approximation
- COFFIN
- LIBSVM
- UNLocBoX
- On stochastic mirror descent with interacting particles: convergence properties and variance reduction
- Pegasos
- iPiano
- QUIC
- Jellyfish
- SSVM
- iPiasco
- CYCLADES
- MLbase
- ARock
- SGD-QN
- BADMM
- CLTune
- BCI2000
- BCILAB
- Brainstorm
- IMRO
- Title not available
- OpenViBE
- CNTK
- PESTO
- AdaGrad
- RMSprop
- TernGrad
- PhaseMax
- lightning
- SGDLibrary
- AIDE
- CNN-RNN
- DSCOVR
- D-ADMM
- DiSCO
- HOGWILD
- NESUN
- FROSTT
- Cyanure
- blockSQP
- ciag
- SCAFFOLD
- SBEED
- AsySPA
- ProxSARAH
- Celer
- CIL
- FinRL
- BLITZ
- Convergence of stochastic proximal gradient algorithm
- Finito
- BenchOpt
- SARGE
- ADADELTA
- NFBLab
- NC-OPT
- adaQN
- Stochastic optimization using a trust-region method and random models
- A hybrid stochastic optimization framework for composite nonconvex optimization
- Adaptive sampling strategies for stochastic optimization
- Convergence rate of incremental gradient and incremental Newton methods
- Forward-Backward-Half Forward Algorithm for Solving Monotone Inclusions
- High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
- SONIA
- LaplacianSmoothing-GradientDescent
- Alpaqa
- APriD
- A Newton Frank-Wolfe method for constrained self-concordant minimization
- Stochastic primal-dual coordinate method for regularized empirical risk minimization
- Title not available
- A stochastic alternating direction method of multipliers for non-smooth and non-convex optimization
- Asymptotic optimality in stochastic optimization
- Trimmed statistical estimation via variance reduction
- Title not available
- A linearly convergent stochastic recursive gradient method for convex optimization
- A Stochastic Proximal Alternating Minimization for Nonsmooth and Nonconvex Optimization
- A general distributed dual coordinate optimization framework for regularized loss minimization
- Surpassing gradient descent provably: a cyclic incremental method with linear convergence rate
- Title not available
- Adaptivity of stochastic gradient methods for nonconvex optimization
This page was built for software: Saga