Cited in
- A Stochastic Proximal Alternating Minimization for Nonsmooth and Nonconvex Optimization
- A general distributed dual coordinate optimization framework for regularized loss minimization
- Surpassing gradient descent provably: a cyclic incremental method with linear convergence rate
- Adaptive sampling for incremental optimization using stochastic gradient descent
- scientific article; zbMATH DE number 7306860
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Some limit properties of Markov chains induced by recursive stochastic algorithms
- scientific article; zbMATH DE number 7626720
- Accelerating incremental gradient optimization with curvature information
- An analysis of stochastic variance reduced gradient for linear inverse problems
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Adaptivity of stochastic gradient methods for nonconvex optimization
- Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
- Nonsmoothness in machine learning: specific structure, proximal identification, and applications
- scientific article; zbMATH DE number 6982318
- Block layer decomposition schemes for training deep neural networks
- An accelerated variance reducing stochastic method with Douglas-Rachford splitting
- DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization
- Statistics of robust optimization: a generalized empirical likelihood approach
- Inertial accelerated SGD algorithms for solving large-scale lower-rank tensor CP decomposition problems
- A class of parallel doubly stochastic algorithms for large-scale learning
- Stochastic nested variance reduction for nonconvex optimization
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
- Forward-reflected-backward method with variance reduction
- An inexact variable metric proximal point algorithm for generic quasi-Newton acceleration
- Quasi-Newton methods for machine learning: forget the past, just sample
- Second-order stochastic optimization for machine learning in linear time
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
- A globally convergent incremental Newton method
- Optimization methods for large-scale machine learning
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
- Fastest rates for stochastic mirror descent methods
- Accelerating mini-batch SARAH by step size rules
- A Continuous-Time Analysis of Distributed Stochastic Gradient
- scientific article; zbMATH DE number 7370629
- Variance reduction for dependent sequences with applications to stochastic gradient MCMC
- Stochastic proximal linear method for structured non-convex problems
- A tight bound of hard thresholding
- Catalyst acceleration for first-order convex optimization: from theory to practice
- Linear convergence of proximal incremental aggregated gradient method for nonconvex nonsmooth minimization problems
- A new homotopy proximal variable-metric framework for composite convex minimization
- Accelerated methods for nonconvex optimization
- A distributed flexible delay-tolerant proximal gradient algorithm
- Stochastic reformulations of linear systems: algorithms and convergence theory
- A smooth inexact penalty reformulation of convex problems with linear constraints
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Deep relaxation: partial differential equations for optimizing deep neural networks
- Accelerated dual-averaging primal–dual method for composite convex minimization
- An optimal randomized incremental gradient method
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
- On inexact stochastic splitting methods for a class of nonconvex composite optimization problems with relative error
- IQN: an incremental quasi-Newton method with local superlinear convergence rate
- Search direction correction with normalized gradient makes first-order methods faster
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- Accelerated stochastic variance reduction for a class of convex optimization problems
- Stochastic variance reduced gradient methods using a trust-region-like scheme
- Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets
- Stochastic learning approach for binary optimization: application to Bayesian optimal design of experiments
- A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
- Stochastic conditional gradient++: (Non)convex minimization and continuous submodular maximization
- Efficient first-order methods for convex minimization: a constructive approach
- Stochastic gradient Hamiltonian Monte Carlo with variance reduction for Bayesian inference
- Finite-sum smooth optimization with SARAH
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- scientific article; zbMATH DE number 7307474
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- Accelerating variance-reduced stochastic gradient methods
- LASSO
- COFFIN
- LIBSVM
- UNLocBoX
- Pegasos
- iPiano
- QUIC
- Jellyfish
- SSVM
- iPiasco
- CYCLADES
- MLbase
- ARock
- SGD-QN
- BADMM
- CLTune
- BCI2000
- BCILAB
- Brainstorm
- IMRO
- OpenViBE
- CNTK
- PESTO
- AdaGrad
- RMSprop
- TernGrad
- PhaseMax
This page was built for software: Saga