Saga (swMATH 39677, MaRDI QID Q55377, FDO Q55377)
Author name not available
Official website: https://paperswithcode.com/paper/saga-a-fast-incremental-gradient-method-with
Source code repository: https://github.com/adefazio/point-saga
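For orientation, the software implements the SAGA incremental gradient method from the linked paper: it keeps a table of the most recently computed gradient of each component function and corrects each fresh stochastic gradient with the table average, giving an unbiased estimator whose variance shrinks as the iterates converge. Below is a minimal NumPy sketch of that update; all names are illustrative and do not reflect the point-saga repository's API.

```python
import numpy as np

def saga(grad_i, w0, n, step, iters, seed=0):
    """Minimal SAGA sketch for minimizing (1/n) * sum_i f_i(w).

    grad_i(i, w) must return the gradient of the i-th component f_i at w.
    Illustrative code only; not the point-saga repository's API.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    # Table of the most recently seen gradient of each component.
    table = np.stack([grad_i(i, w) for i in range(n)])
    avg = table.mean(axis=0)  # running mean of the table
    for _ in range(iters):
        j = rng.integers(n)
        g_new = grad_i(j, w)
        # SAGA estimator: unbiased, variance shrinks as the table converges.
        w -= step * (g_new - table[j] + avg)
        avg += (g_new - table[j]) / n  # keep the mean in sync with the table
        table[j] = g_new
    return w

# Example: ridge-regularized least squares on synthetic data.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((200, 10)), rng.standard_normal(200)
grad = lambda i, w: (A[i] @ w - b[i]) * A[i] + 1e-2 * w
w_hat = saga(grad, np.zeros(10), n=200, step=0.02, iters=5000)
```

The gradient table costs O(n d) memory in general; for linear models one scalar per example suffices, which is the variant usually implemented in practice.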
Cited In (only showing first 100 items)
- Title not available
- Adaptivity of stochastic gradient methods for nonconvex optimization
- Title not available
- Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
- Statistics of robust optimization: a generalized empirical likelihood approach
- Block layer decomposition schemes for training deep neural networks
- Inertial accelerated SGD algorithms for solving large-scale lower-rank tensor CP decomposition problems
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
- An inexact variable metric proximal point algorithm for generic quasi-Newton acceleration
- Second-order stochastic optimization for machine learning in linear time
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
- A Continuous-Time Analysis of Distributed Stochastic Gradient
- Title not available
- Accelerating mini-batch SARAH by step size rules
- Variance reduction for dependent sequences with applications to stochastic gradient MCMC
- Stochastic proximal linear method for structured non-convex problems
- Linear convergence of proximal incremental aggregated gradient method for nonconvex nonsmooth minimization problems
- Stochastic reformulations of linear systems: algorithms and convergence theory
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- IQN: an incremental quasi-Newton method with local superlinear convergence rate
- Accelerated stochastic variance reduction for a class of convex optimization problems
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets
- A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
- Stochastic gradient Hamiltonian Monte Carlo with variance reduction for Bayesian inference
- Finite-sum smooth optimization with SARAH
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- Accelerating variance-reduced stochastic gradient methods
- Random gradient extrapolation for distributed and stochastic optimization
- A stochastic primal-dual method for a class of nonconvex constrained optimization
- Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization
- On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems
- Accelerated randomized mirror descent algorithms for composite non-strongly convex optimization
- Global convergence rate of proximal incremental aggregated gradient methods
- Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training
- Primal-dual incremental gradient method for nonsmooth and convex optimization problems
- Inexact proximal stochastic gradient method for convex composite optimization
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Why random reshuffling beats stochastic gradient descent
- Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression
- Katyusha: the first direct acceleration of stochastic gradient methods
- Fully asynchronous policy evaluation in distributed reinforcement learning over networks
- Convergence analysis of the stochastic reflected forward-backward splitting algorithm
- ConstrainedLasso
- LS-MCMC
- IMPALA
- Modern regularization methods for inverse problems
- DILAND
- Dualize, split, randomize: toward fast nonsmooth optimization algorithms
- Inexact SARAH algorithm for stochastic optimization
- Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Multilevel composite stochastic optimization via nested variance reduction
- Bregman Finito/MISO for nonconvex regularized finite sum minimization without Lipschitz gradient continuity
- SpiderBoost
- Stochastic variance-reduced cubic regularization methods
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- Laplacian smoothing gradient descent
- Optimization for deep learning: an overview
- Riemannian Stochastic Variance Reduced Gradient Algorithm with Retraction and Vector Transport
- High-performance statistical computing in the computing environments of the 2020s
- Title not available
- Adaptive sampling for incremental optimization using stochastic gradient descent
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Accelerating incremental gradient optimization with curvature information
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Nonsmoothness in machine learning: specific structure, proximal identification, and applications
- Stochastic nested variance reduction for nonconvex optimization
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Optimization methods for large-scale machine learning
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- A globally convergent incremental Newton method
- Fastest rates for stochastic mirror descent methods
- A tight bound of hard thresholding
- Catalyst acceleration for first-order convex optimization: from theory to practice
- A distributed flexible delay-tolerant proximal gradient algorithm
- Accelerated methods for nonconvex optimization
- A smooth inexact penalty reformulation of convex problems with linear constraints
- Deep relaxation: partial differential equations for optimizing deep neural networks
- An optimal randomized incremental gradient method
- Stochastic variance reduced gradient methods using a trust-region-like scheme
- Efficient first-order methods for convex minimization: a constructive approach
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- Accelerated and Instance-Optimal Policy Evaluation with Linear Function Approximation
- COFFIN
- LIBSVM
- UNLocBoX
- On stochastic mirror descent with interacting particles: convergence properties and variance reduction
- Pegasos
- iPiano
- QUIC
- Jellyfish
- SSVM
- iPiasco
- CYCLADES
- MLbase
This page was built for software: Saga