The following pages link to Saga (Q55377):
Displaying 50 items.
- High-dimensional model recovery from random sketched data by exploring intrinsic sparsity (Q782446)
- Nonsmoothness in machine learning: specific structure, proximal identification, and applications (Q829492)
- Stochastic optimization using a trust-region method and random models (Q1646570)
- Inexact proximal stochastic gradient method for convex composite optimization (Q1694394)
- On variance reduction for stochastic smooth convex optimization with multiplicative noise (Q1739038)
- An optimal randomized incremental gradient method (Q1785198)
- Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training (Q1790674)
- Stochastic variance reduced gradient methods using a trust-region-like scheme (Q1995995)
- Convergence of stochastic proximal gradient algorithm (Q2019902)
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization (Q2020608)
- Analysis of biased stochastic gradient descent using sequential semidefinite programs (Q2020610)
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods (Q2023684)
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods (Q2031928)
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching (Q2039235)
- Fastest rates for stochastic mirror descent methods (Q2044496)
- On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems (Q2045192)
- Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression (Q2057761)
- Fully asynchronous policy evaluation in distributed reinforcement learning over networks (Q2063869)
- On stochastic mirror descent with interacting particles: convergence properties and variance reduction (Q2077867)
- Dualize, split, randomize: toward fast nonsmooth optimization algorithms (Q2082232)
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization (Q2086938)
- Convergence analysis of the stochastic reflected forward-backward splitting algorithm (Q2091216)
- High-performance statistical computing in the computing environments of the 2020s (Q2092893)
- A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization (Q2103421)
- Inertial accelerated SGD algorithms for solving large-scale lower-rank tensor CP decomposition problems (Q2112682)
- Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity (Q2115253)
- Accelerating variance-reduced stochastic gradient methods (Q2118092)
- A hybrid stochastic optimization framework for composite nonconvex optimization (Q2118109)
- Accelerating mini-batch SARAH by step size rules (Q2127094)
- Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems (Q2133414)
- A Newton Frank-Wolfe method for constrained self-concordant minimization (Q2141726)
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization (Q2149551)
- Finite-sum smooth optimization with SARAH (Q2149950)
- A stochastic primal-dual method for a class of nonconvex constrained optimization (Q2162528)
- Laplacian smoothing gradient descent (Q2168883)
- Block layer decomposition schemes for training deep neural networks (Q2173515)
- Accelerating incremental gradient optimization with curvature information (Q2181597)
- Linear convergence of cyclic SAGA (Q2193004)
- Efficient first-order methods for convex minimization: a constructive approach (Q2205976)
- Optimization for deep learning: an overview (Q2218095)
- Why random reshuffling beats stochastic gradient descent (Q2227529)
- A linearly convergent stochastic recursive gradient method for convex optimization (Q2228399)
- Primal-dual incremental gradient method for nonsmooth and convex optimization problems (Q2230784)
- Forward-reflected-backward method with variance reduction (Q2231039)
- Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets (Q2282820)
- Accelerated proximal incremental algorithm schemes for non-strongly convex functions (Q2297863)
- Deep relaxation: partial differential equations for optimizing deep neural networks (Q2319762)
- Stochastic gradient Hamiltonian Monte Carlo with variance reduction for Bayesian inference (Q2320597)
- Primal-dual stochastic distributed algorithm for constrained convex optimization (Q2334189)
- A globally convergent incremental Newton method (Q2349125)