An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
Publication: 4389203
DOI: 10.1137/S1052623495294797
zbMath: 0922.90131
OpenAlex: W2061863621
MaRDI QID: Q4389203
Publication date: 12 May 1998
Published in: SIAM Journal on Optimization
Full work available at URL: https://doi.org/10.1137/s1052623495294797
Keywords: convergence analysis; unconstrained optimization; neural networks; backpropagation; gradient projection; gradient algorithm; incremental gradient method; finite sum of functions; nonlinear neural network training
Numerical mathematical programming methods (65K05) Nonlinear programming (90C30) Numerical methods based on nonlinear programming (49M37)
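The method named in the title minimizes a finite sum of functions by cycling through the component gradients, adding a momentum (heavy-ball) term, and optionally projecting onto a constraint set. As a rough illustration only, the sketch below shows a generic incremental gradient step with momentum and projection; the function name, the simple diminishing stepsize (used in place of the paper's adaptive stepsize rule), and the least-squares example are all assumptions, not the paper's exact algorithm.

```python
import numpy as np

def incremental_gradient_momentum(grads, x0, epochs=50, stepsize0=0.1,
                                  beta=0.8, project=None):
    """Illustrative incremental (cyclic) gradient method with momentum.

    grads:   list of callables, grads[i](x) = gradient of f_i at x,
             for the objective f(x) = sum_i f_i(x)
    project: optional projection onto a constraint set (None = unconstrained)
    Note: uses a simple diminishing stepsize, not the adaptive rule of the paper.
    """
    x = np.asarray(x0, dtype=float)
    momentum = np.zeros_like(x)
    for k in range(epochs):
        stepsize = stepsize0 / (k + 1)          # placeholder diminishing stepsize
        for g in grads:                          # one cyclic pass over components
            step = -stepsize * g(x) + beta * momentum
            x_new = x + step
            if project is not None:              # gradient-projection variant
                x_new = project(x_new)
            momentum = x_new - x                 # heavy-ball momentum term
            x = x_new
    return x

# Usage example (hypothetical): least squares, f_i(x) = 0.5 * (a_i @ x - b_i)^2
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
grads = [lambda x, a=A[i], bi=b[i]: a * (a @ x - bi) for i in range(20)]
x_est = incremental_gradient_momentum(grads, np.zeros(5))
```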
Related Items
A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks
Descent methods with linesearch in the presence of perturbations
Modified spectral projected subgradient method: convergence analysis and momentum parameter heuristics
Parallel stochastic gradient algorithms for large-scale matrix completion
Convergence analysis of perturbed feasible descent methods
Spectral projected subgradient with a momentum term for the Lagrangean dual approach
Approximation accuracy, gradient methods, and error bound for structured convex optimization
Automatic, dynamic, and nearly optimal learning rate specification via local quadratic approximation
A framework for parallel second order incremental optimization algorithms for solving partially separable problems
An incremental primal-dual method for nonlinear programming with special structure
A distributed proximal gradient method with time-varying delays for solving additive convex optimizations
A stochastic variational framework for fitting and diagnosing generalized linear mixed models
Incremental subgradient algorithms with dynamic step sizes for separable convex optimizations
Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality
Random algorithms for convex minimization problems
Incremental proximal methods for large scale convex optimization
Recent Theoretical Advances in Non-Convex Optimization
String-averaging incremental stochastic subgradient algorithms
A Smooth Inexact Penalty Reformulation of Convex Problems with Linear Constraints
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Minimizing finite sums with the stochastic average gradient
An incremental subgradient method on Riemannian manifolds
Incrementally updated gradient methods for constrained and regularized optimization
Convergence property of gradient-type methods with non-monotone line search in the presence of perturbations
Robust inversion, dimensionality reduction, and randomized sampling
Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
Accelerating deep neural network training with inconsistent stochastic gradient descent
A merit function approach to the subgradient method with averaging
On the linear convergence of the stochastic gradient method with constant step-size
Error stability properties of generalized gradient-type algorithms
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
Convergence Rate of Incremental Gradient and Incremental Newton Methods
On perturbed steepest descent methods with inexact line search for bilevel convex optimization
Distributed Stochastic Inertial-Accelerated Methods with Delayed Derivatives for Nonconvex Problems
A globally convergent incremental Newton method
On the Convergence Rate of Incremental Aggregated Gradient Algorithms
Incremental accelerated gradient methods for SVM classification: study of the constrained approach
Global convergence of the Dai-Yuan conjugate gradient method with perturbations