Cited in (only showing the first 100 items)
- Linear convergence of proximal incremental aggregated gradient method for nonconvex nonsmooth minimization problems
- High-performance statistical computing in the computing environments of the 2020s
- Random gradient extrapolation for distributed and stochastic optimization
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Convergence analysis of the stochastic reflected forward-backward splitting algorithm
- Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression
- Multilevel composite stochastic optimization via nested variance reduction
- Inexact proximal stochastic gradient method for convex composite optimization
- On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems
- Block layer decomposition schemes for training deep neural networks
- Statistics of robust optimization: a generalized empirical likelihood approach
- Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
- A Continuous-Time Analysis of Distributed Stochastic Gradient
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- Adaptivity of stochastic gradient methods for nonconvex optimization
- Bregman Finito/MISO for nonconvex regularized finite sum minimization without Lipschitz gradient continuity
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- scientific article; zbMATH DE number 7370629 (no title available)
- IQN: an incremental quasi-Newton method with local superlinear convergence rate
- Accelerated randomized mirror descent algorithms for composite non-strongly convex optimization
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- Accelerated stochastic variance reduction for a class of convex optimization problems
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- An inexact variable metric proximal point algorithm for generic quasi-Newton acceleration
- Laplacian smoothing gradient descent
- Dualize, split, randomize: toward fast nonsmooth optimization algorithms
- scientific article; zbMATH DE number 7626720 (no title available)
- Optimization for deep learning: an overview
- Modern regularization methods for inverse problems
- A stochastic primal-dual method for a class of nonconvex constrained optimization
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Inertial accelerated SGD algorithms for solving large-scale lower-rank tensor CP decomposition problems
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
- Why random reshuffling beats stochastic gradient descent
- Stochastic variance-reduced cubic regularization methods
- Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization
- Inexact SARAH algorithm for stochastic optimization
- Riemannian Stochastic Variance Reduced Gradient Algorithm with Retraction and Vector Transport
- scientific article; zbMATH DE number 6982318 (no title available)
- Katyusha: the first direct acceleration of stochastic gradient methods
- Accelerating mini-batch SARAH by step size rules
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
- Global convergence rate of proximal incremental aggregated gradient methods
- Stochastic reformulations of linear systems: algorithms and convergence theory
- Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets
- APriD
- ConstrainedLasso
- LS-MCMC
- IMPALA
- Fully asynchronous policy evaluation in distributed reinforcement learning over networks
- DILAND
- Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
- Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training
- Variance reduction for dependent sequences with applications to stochastic gradient MCMC
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Stochastic gradient Hamiltonian Monte Carlo with variance reduction for Bayesian inference
- Adaptive sampling for incremental optimization using stochastic gradient descent
- scientific article; zbMATH DE number 7306860 (no title available)
- SpiderBoost
- Finite-sum smooth optimization with SARAH
- Primal-dual incremental gradient method for nonsmooth and convex optimization problems
- Accelerating variance-reduced stochastic gradient methods
- Second-order stochastic optimization for machine learning in linear time
- Stochastic proximal linear method for structured non-convex problems
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
- PyCUTEst
- PrePDHG
- scientific article; zbMATH DE number 7415113 (no title available)
- A class of parallel doubly stochastic algorithms for large-scale learning
- Choose your path wisely: gradient descent in a Bregman distance framework
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
- Multilevel stochastic gradient methods for nested composition optimization
- Recent Advances in Stochastic Riemannian Optimization
- On inexact stochastic splitting methods for a class of nonconvex composite optimization problems with relative error
- An accelerated variance reducing stochastic method with Douglas-Rachford splitting
- scientific article; zbMATH DE number 7307474 (no title available)
- Stochastic conditional gradient++: (Non)convex minimization and continuous submodular maximization
- On the adaptivity of stochastic gradient-based optimization
- Optimal transport-based distributionally robust optimization: structural properties and iterative schemes
- Variance reduction for root-finding problems
- A new homotopy proximal variable-metric framework for composite convex minimization
- Real-time decoding of attentional states using closed-loop EEG neurofeedback
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
- Some limit properties of Markov chains induced by recursive stochastic algorithms
- Forward-reflected-backward method with variance reduction
- A stochastic variance reduced primal dual fixed point method for linearly constrained separable optimization
- Stochastic learning approach for binary optimization: application to Bayesian optimal design of experiments
- Accelerated dual-averaging primal–dual method for composite convex minimization
- Accelerated proximal incremental algorithm schemes for non-strongly convex functions
- An analysis of stochastic variance reduced gradient for linear inverse problems
- Improved asynchronous parallel optimization analysis for stochastic incremental methods
- Stochastic distributed learning with gradient quantization and double-variance reduction
- A randomized incremental primal-dual method for decentralized consensus optimization
- Linear convergence of cyclic SAGA
- An adaptive gradient method with energy and momentum
- Stochastic sub-sampled Newton method with variance reduction
This page was built for software: Saga