Minimizing finite sums with the stochastic average gradient

From MaRDI portal
Revision as of 07:25, 30 January 2024 by Import240129110113 (talk | contribs) (Created automatically from import240129110113)

Publication: 517295

DOI: 10.1007/s10107-016-1030-6
zbMath: 1358.90073
arXiv: 1309.2388
OpenAlex: W2963156201
MaRDI QID: Q517295

Nicolas Le Roux, Mark Schmidt, Francis Bach

Publication date: 23 March 2017

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1309.2388
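The paper's stochastic average gradient (SAG) method stores the most recently evaluated gradient of each summand and, at every iteration, refreshes one entry and steps along the average of the stored gradients. A minimal sketch in Python/NumPy; the least-squares instance, step size, and iteration count are illustrative assumptions, not taken from this page:

```python
import numpy as np

# Illustrative finite-sum problem: minimize (1/n) * sum_i f_i(x) with
# f_i(x) = 0.5 * (a_i^T x - b_i)^2 on synthetic data (an assumption here).
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    # Gradient of the single summand f_i at x.
    return (A[i] @ x - b[i]) * A[i]

def objective(x):
    return 0.5 * np.mean((A @ x - b) ** 2)

x = np.zeros(d)
memory = np.zeros((n, d))   # last gradient evaluated for each f_i
g_sum = np.zeros(d)         # running sum of the stored gradients
step = 0.01                 # illustrative step size, not tuned

for _ in range(5000):
    i = rng.integers(n)             # pick one summand uniformly at random
    g_new = grad_i(x, i)
    g_sum += g_new - memory[i]      # swap f_i's old gradient out of the sum
    memory[i] = g_new
    x -= (step / n) * g_sum         # move along the average stored gradient
```

Each iteration costs one gradient evaluation, like plain SGD, yet the update direction aggregates information from all n summands, which is what lets SAG attain a linear convergence rate on strongly convex finite sums.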




Related Items (showing first 100 items)

Some Limit Properties of Markov Chains Induced by Recursive Stochastic Algorithms
Stochastic accelerated alternating direction method of multipliers with importance sampling
Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning
General framework for binary classification on top samples
Quasi-Newton methods for machine learning: forget the past, just sample
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
Finite-sum smooth optimization with SARAH
Stochastic Learning Approach for Binary Optimization: Application to Bayesian Optimal Design of Experiments
Complexity Analysis of stochastic gradient methods for PDE-constrained optimal Control Problems with uncertain parameters
Adaptive Sampling Strategies for Stochastic Optimization
Minimizing robust estimates of sums of parameterized functions
Sketched Newton-Raphson
Accelerating incremental gradient optimization with curvature information
Optimizing Adaptive Importance Sampling by Stochastic Approximation
Improving kernel online learning with a snapshot memory
Cocoercivity, smoothness and bias in variance-reduced stochastic gradient methods
Acceleration on Adaptive Importance Sampling with Sample Average Approximation
Convergence rates of accelerated proximal gradient algorithms under independent noise
The multiproximal linearization method for convex composite problems
On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis
Linear convergence of cyclic SAGA
Generalized forward-backward splitting with penalization for monotone inclusion problems
An accelerated variance reducing stochastic method with Douglas-Rachford splitting
Inexact proximal stochastic gradient method for convex composite optimization
Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Batched Stochastic Gradient Descent with Weighted Sampling
A Continuous-Time Analysis of Distributed Stochastic Gradient
Bi-fidelity stochastic gradient descent for structural optimization under uncertainty
Multilevel Stochastic Gradient Methods for Nested Composition Optimization
A linearly convergent stochastic recursive gradient method for convex optimization
Stochastic Conditional Gradient++: (Non)Convex Minimization and Continuous Submodular Maximization
On variance reduction for stochastic smooth convex optimization with multiplicative noise
A Smooth Inexact Penalty Reformulation of Convex Problems with Linear Constraints
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Optimization Methods for Large-Scale Machine Learning
Leveraged least trimmed absolute deviations
An Optimal Algorithm for Decentralized Finite-Sum Optimization
Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
Convergence of stochastic proximal gradient algorithm
Point process estimation with Mirror Prox algorithms
Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute gradient" for structured convex optimization
Analysis of biased stochastic gradient descent using sequential semidefinite programs
Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
Convergence rates for optimised adaptive importance samplers
The Averaged Kaczmarz Iteration for Solving Inverse Problems
Random Gradient Extrapolation for Distributed and Stochastic Optimization
Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
Ensemble Kalman inversion: a derivative-free technique for machine learning tasks
Stochastic proximal quasi-Newton methods for non-convex composite optimization
Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
Relative utility bounds for empirically optimal portfolios
Multivariate goodness-of-fit tests based on Wasserstein distance
Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences
A stochastic primal-dual method for optimization with conditional value at risk constraints
A stochastic trust region method for unconstrained optimization problems
Provable accelerated gradient method for nonconvex low rank optimization
On the regularization effect of stochastic gradient descent applied to least-squares
Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression
Analysis of stochastic gradient descent in continuous time
Fast incremental expectation maximization for finite-sum optimization: nonasymptotic convergence
Fully asynchronous policy evaluation in distributed reinforcement learning over networks
The recursive variational Gaussian approximation (R-VGA)
An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
Adaptive Sampling for Incremental Optimization Using Stochastic Gradient Descent
Deep relaxation: partial differential equations for optimizing deep neural networks
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
A randomized incremental primal-dual method for decentralized consensus optimization
A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems
Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
Stochastic average gradient algorithm for multirate FIR models with varying time delays using self-organizing maps
Multi-agent reinforcement learning: a selective overview of theories and algorithms
PDE-Constrained Optimal Control Problems with Uncertain Parameters using SAGA
Stochastic proximal linear method for structured non-convex problems
Inexact SARAH algorithm for stochastic optimization
A hierarchically low-rank optimal transport dissimilarity measure for structured data
Linear convergence of proximal incremental aggregated gradient method for nonconvex nonsmooth minimization problems
A stochastic first-order trust-region method with inexact restoration for finite-sum minimization
A Stochastic Proximal Alternating Minimization for Nonsmooth and Nonconvex Optimization
Accelerating variance-reduced stochastic gradient methods
A hybrid stochastic optimization framework for composite nonconvex optimization


Uses Software


Cites Work


This page was built for publication: Minimizing finite sums with the stochastic average gradient