The following pages link to (Q5053196):
Displaying 6 items.
- Adaptive step size rules for stochastic optimization in large-scale learning (Q6116586)
- Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality (Q6161313)
- Random-reshuffled SARAH does not need full gradient computations (Q6204201)
- Global stability of first-order methods for coercive tame functions (Q6608042)
- Variance-reduced reshuffling gradient descent for nonconvex optimization: centralized and distributed algorithms (Q6659241)
- An algorithm for learning representations of models with scarce data (Q6660916)