Pages that link to "Item:Q4389203"
The following pages link to "An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule" (Q4389203):
Displaying 38 items.
- Global convergence of the Dai-Yuan conjugate gradient method with perturbations (Q263134)
- A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks (Q301668)
- Minimizing finite sums with the stochastic average gradient (Q517295)
- Approximation accuracy, gradient methods, and error bound for structured convex optimization (Q607498)
- Random algorithms for convex minimization problems (Q644912)
- Incremental proximal methods for large scale convex optimization (Q644913)
- Robust inversion, dimensionality reduction, and randomized sampling (Q715245)
- Spectral projected subgradient with a momentum term for the Lagrangean dual approach (Q878598)
- A stochastic variational framework for fitting and diagnosing generalized linear mixed models (Q899068)
- Error stability properties of generalized gradient-type algorithms (Q1273917)
- Descent methods with linesearch in the presence of perturbations (Q1360171)
- Convergence analysis of perturbed feasible descent methods (Q1379956)
- Modified spectral projected subgradient method: convergence analysis and momentum parameter heuristics (Q1653958)
- An incremental subgradient method on Riemannian manifolds (Q1752648)
- An incremental primal-dual method for nonlinear programming with special structure (Q1936792)
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods (Q2023684)
- Incrementally updated gradient methods for constrained and regularized optimization (Q2251572)
- Accelerating deep neural network training with inconsistent stochastic gradient descent (Q2292210)
- On the linear convergence of the stochastic gradient method with constant step-size (Q2311205)
- A globally convergent incremental Newton method (Q2349125)
- Incremental accelerated gradient methods for SVM classification: study of the constrained approach (Q2355191)
- Parallel stochastic gradient algorithms for large-scale matrix completion (Q2392935)
- A framework for parallel second order incremental optimization algorithms for solving partially separable problems (Q2419531)
- Convergence property of gradient-type methods with non-monotone line search in the presence of perturbations (Q2489332)
- On perturbed steepest descent methods with inexact line search for bilevel convex optimization (Q3112499)
- String-averaging incremental stochastic subgradient algorithms (Q4631774)
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization (Q4636997)
- Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate (Q4641666)
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications (Q4687241)
- A Smooth Inexact Penalty Reformulation of Convex Problems with Linear Constraints (Q5152474)
- Convergence Rate of Incremental Gradient and Incremental Newton Methods (Q5237308)
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms (Q5266533)
- A merit function approach to the subgradient method with averaging (Q5459823)
- Distributed Stochastic Inertial-Accelerated Methods with Delayed Derivatives for Nonconvex Problems (Q5863523)
- Automatic, dynamic, and nearly optimal learning rate specification via local quadratic approximation (Q6054924)
- A distributed proximal gradient method with time-varying delays for solving additive convex optimizations (Q6110428)
- Incremental subgradient algorithms with dynamic step sizes for separable convex optimizations (Q6140717)
- Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality (Q6161313)