Saga
From MaRDI portal
Software: 55377
swMATH: 39677 · MaRDI QID: Q55377 · FDO: Q55377
Author name not available
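This entry appears to refer to SAGA, the variance-reduced incremental gradient method for finite-sum minimization that the citations below build on. As a minimal illustrative sketch (not taken from this entry; the function and parameter names are hypothetical), SAGA keeps a table of the most recently seen gradient for each component and uses the table as a control variate, here applied to a small least-squares problem:

```python
import numpy as np

def saga(grad_i, n, x0, step, iters, rng):
    """Minimal SAGA sketch: store the last gradient seen for each
    component i and use it as a control variate so the stochastic
    update stays unbiased while its variance shrinks."""
    x = x0.copy()
    table = np.array([grad_i(i, x) for i in range(n)])  # stored component gradients
    avg = table.mean(axis=0)                            # running average of the table
    for _ in range(iters):
        j = rng.integers(n)
        g = grad_i(j, x)
        # unbiased, variance-reduced direction
        x -= step * (g - table[j] + avg)
        avg += (g - table[j]) / n  # update average before overwriting the slot
        table[j] = g
    return x

# usage: f(x) = (1/2n) * sum_i (a_i^T x - b_i)^2 with a noiseless solution
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true
grad_i = lambda i, x: A[i] * (A[i] @ x - b[i])
x_hat = saga(grad_i, n, np.zeros(d), step=0.02, iters=5000, rng=rng)
```

The gradient table costs O(nd) memory in general; for generalized linear models it reduces to n scalars, since each stored gradient is a scalar multiple of the fixed data vector a_i.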
Cited in (first 100 items shown)
- Accelerating incremental gradient optimization with curvature information
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Adaptivity of stochastic gradient methods for nonconvex optimization
- Title not available
- Nonsmoothness in machine learning: specific structure, proximal identification, and applications
- Stochastic nested variance reduction for nonconvex optimization
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Second-order stochastic optimization for machine learning in linear time
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Optimization methods for large-scale machine learning
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
- A globally convergent incremental Newton method
- A Continuous-Time Analysis of Distributed Stochastic Gradient
- Title not available
- Fastest rates for stochastic mirror descent methods
- Stochastic proximal linear method for structured non-convex problems
- Linear convergence of proximal incremental aggregated gradient method for nonconvex nonsmooth minimization problems
- A tight bound of hard thresholding
- Catalyst acceleration for first-order convex optimization: from theory to practice
- A distributed flexible delay-tolerant proximal gradient algorithm
- Accelerated methods for nonconvex optimization
- Stochastic reformulations of linear systems: algorithms and convergence theory
- A smooth inexact penalty reformulation of convex problems with linear constraints
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Deep relaxation: partial differential equations for optimizing deep neural networks
- An optimal randomized incremental gradient method
- IQN: an incremental quasi-Newton method with local superlinear convergence rate
- Accelerated stochastic variance reduction for a class of convex optimization problems
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- Stochastic variance reduced gradient methods using a trust-region-like scheme
- Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets
- A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
- Efficient first-order methods for convex minimization: a constructive approach
- Stochastic gradient Hamiltonian Monte Carlo with variance reduction for Bayesian inference
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- Accelerating variance-reduced stochastic gradient methods
- Random gradient extrapolation for distributed and stochastic optimization
- A stochastic primal-dual method for a class of nonconvex constrained optimization
- Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization
- On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems
- Global convergence rate of proximal incremental aggregated gradient methods
- Accelerated and Instance-Optimal Policy Evaluation with Linear Function Approximation
- On stochastic mirror descent with interacting particles: convergence properties and variance reduction
- Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training
- Primal-dual incremental gradient method for nonsmooth and convex optimization problems
- Inexact proximal stochastic gradient method for convex composite optimization
- Title not available
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Why random reshuffling beats stochastic gradient descent
- Convergence of stochastic proximal gradient algorithm
- Stochastic optimization using a trust-region method and random models
- A hybrid stochastic optimization framework for composite nonconvex optimization
- Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression
- Adaptive sampling strategies for stochastic optimization
- Convergence rate of incremental gradient and incremental Newton methods
- Forward-Backward-Half Forward Algorithm for Solving Monotone Inclusions
- High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
- A Newton Frank-Wolfe method for constrained self-concordant minimization
- Stochastic primal-dual coordinate method for regularized empirical risk minimization
- Modern regularization methods for inverse problems
- Title not available
- A stochastic alternating direction method of multipliers for non-smooth and non-convex optimization
- Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
- Asymptotic optimality in stochastic optimization
- Trimmed statistical estimation via variance reduction
- Stochastic variance-reduced cubic regularization methods
- Title not available
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- A linearly convergent stochastic recursive gradient method for convex optimization
- Laplacian smoothing gradient descent
- Riemannian Stochastic Variance Reduced Gradient Algorithm with Retraction and Vector Transport
- A Stochastic Proximal Alternating Minimization for Nonsmooth and Nonconvex Optimization
- A general distributed dual coordinate optimization framework for regularized loss minimization
- Surpassing gradient descent provably: a cyclic incremental method with linear convergence rate
- Title not available
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Some limit properties of Markov chains induced by recursive stochastic algorithms
- Title not available
- An analysis of stochastic variance reduced gradient for linear inverse problems
- Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
- DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization
- Statistics of robust optimization: a generalized empirical likelihood approach
- An accelerated variance reducing stochastic method with Douglas-Rachford splitting
- Block layer decomposition schemes for training deep neural networks
- A class of parallel doubly stochastic algorithms for large-scale learning
- Inertial accelerated SGD algorithms for solving large-scale lower-rank tensor CP decomposition problems
- Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
- An inexact variable metric proximal point algorithm for generic quasi-Newton acceleration
- Forward-reflected-backward method with variance reduction
- Quasi-Newton methods for machine learning: forget the past, just sample
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
- Accelerating mini-batch SARAH by step size rules
- Variance reduction for dependent sequences with applications to stochastic gradient MCMC
- A new homotopy proximal variable-metric framework for composite convex minimization
- Accelerated dual-averaging primal–dual method for composite convex minimization
This page was built for software: Saga