Saga
From MaRDI portal
Software:55377
swMATH: 39677 · MaRDI QID: Q55377 · FDO: Q55377
Author name not available
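This entry catalogs the Saga software, which implements the SAGA incremental gradient method of Defazio, Bach, and Lacoste-Julien (2014). For context, the SAGA update keeps a table of the most recently evaluated per-example gradients and uses it to build a variance-reduced search direction. Below is a minimal illustrative sketch (not the cataloged software itself), assuming a least-squares finite-sum objective; the function name `saga` and all parameters are hypothetical choices for this example.

```python
import numpy as np

def saga(A, b, step, n_iters, seed=0):
    """Sketch of SAGA for least squares: min_x (1/2n) * sum_i (a_i^T x - b_i)^2.

    Keeps a table of the last-seen gradient of each example and uses
    grad_i(x) - table[i] + mean(table) as the variance-reduced direction.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    table = np.zeros((n, d))          # stored gradient for each example
    table_mean = table.mean(axis=0)   # running average of the table
    for _ in range(n_iters):
        i = rng.integers(n)
        g_new = (A[i] @ x - b[i]) * A[i]       # gradient of example i at x
        x -= step * (g_new - table[i] + table_mean)
        table_mean += (g_new - table[i]) / n   # keep the average in sync
        table[i] = g_new                       # overwrite the stored gradient
    return x
```

A typical step size for this problem class is on the order of 1/(3 L), with L the largest per-example smoothness constant (here max_i ||a_i||^2), under which the iterates converge linearly on strongly convex finite sums.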
Cited In (only showing first 100 items)
- Random Gradient Extrapolation for Distributed and Stochastic Optimization
- Accelerating incremental gradient optimization with curvature information
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
- Nonsmoothness in machine learning: specific structure, proximal identification, and applications
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Adaptive Sampling Strategies for Stochastic Optimization
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
- A globally convergent incremental Newton method
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- A Continuous-Time Analysis of Distributed Stochastic Gradient
- Fastest rates for stochastic mirror descent methods
- Stochastic proximal linear method for structured non-convex problems
- Linear convergence of proximal incremental aggregated gradient method for nonconvex nonsmooth minimization problems
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Deep relaxation: partial differential equations for optimizing deep neural networks
- Convergence Rate of Incremental Gradient and Incremental Newton Methods
- An optimal randomized incremental gradient method
- Accelerated stochastic variance reduction for a class of convex optimization problems
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- Stochastic variance reduced gradient methods using a trust-region-like scheme
- Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets
- A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
- Efficient first-order methods for convex minimization: a constructive approach
- Stochastic gradient Hamiltonian Monte Carlo with variance reduction for Bayesian inference
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- Accelerating variance-reduced stochastic gradient methods
- A stochastic primal-dual method for a class of nonconvex constrained optimization
- Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization
- Trimmed Statistical Estimation via Variance Reduction
- A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization
- Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
- On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems
- Accelerated and Instance-Optimal Policy Evaluation with Linear Function Approximation
- On stochastic mirror descent with interacting particles: convergence properties and variance reduction
- Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training
- Primal-dual incremental gradient method for nonsmooth and convex optimization problems
- Inexact proximal stochastic gradient method for convex composite optimization
- Why random reshuffling beats stochastic gradient descent
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
- Convergence of stochastic proximal gradient algorithm
- Stochastic optimization using a trust-region method and random models
- A hybrid stochastic optimization framework for composite nonconvex optimization
- Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression
- Forward-Backward-Half Forward Algorithm for Solving Monotone Inclusions
- Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization
- High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
- Optimization Methods for Large-Scale Machine Learning
- A Newton Frank-Wolfe method for constrained self-concordant minimization
- Modern regularization methods for inverse problems
- A Tight Bound of Hard Thresholding
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Accelerated Methods for NonConvex Optimization
- A stochastic alternating direction method of multipliers for non-smooth and non-convex optimization
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Asymptotic optimality in stochastic optimization
- Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory
- A Smooth Inexact Penalty Reformulation of Convex Problems with Linear Constraints
- A Distributed Flexible Delay-Tolerant Proximal Gradient Algorithm
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- A linearly convergent stochastic recursive gradient method for convex optimization
- Laplacian smoothing gradient descent
- Riemannian Stochastic Variance Reduced Gradient Algorithm with Retraction and Vector Transport
- A Stochastic Proximal Alternating Minimization for Nonsmooth and Nonconvex Optimization
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Multilevel Stochastic Gradient Methods for Nested Composition Optimization
- An analysis of stochastic variance reduced gradient for linear inverse problems
- Stochastic Conditional Gradient++: (Non)Convex Minimization and Continuous Submodular Maximization
- Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
- DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization
- An accelerated variance reducing stochastic method with Douglas-Rachford splitting
- Block layer decomposition schemes for training deep neural networks
- Inertial accelerated SGD algorithms for solving large-scale lower-rank tensor CP decomposition problems
- Optimal Transport-Based Distributionally Robust Optimization: Structural Properties and Iterative Schemes
- Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
- Forward-reflected-backward method with variance reduction
- Quasi-Newton methods for machine learning: forget the past, just sample
- A Stochastic Variance Reduced Primal Dual Fixed Point Method for Linearly Constrained Separable Optimization
- Adaptive Sampling for Incremental Optimization Using Stochastic Gradient Descent
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
- An Adaptive Gradient Method with Energy and Momentum
- On the Adaptivity of Stochastic Gradient-Based Optimization
This page was built for software: Saga