Pages that link to "Item:Q2227529"
The following pages link to "Why random reshuffling beats stochastic gradient descent" (Q2227529):
Displaying 14 items.
- Incremental without replacement sampling in nonconvex optimization (Q2046568)
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods (Q4641660)
- (Q5053196)
- On the Efficiency of Random Permutation for ADMM and Coordinate Descent (Q5108265)
- Two Symmetrized Coordinate Descent Methods Can Be $O(n^2)$ Times Slower Than the Randomized Version (Q5162659)
- Convergence Rate of Incremental Gradient and Incremental Newton Methods (Q5237308)
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms (Q5266533)
- Simple and fast algorithm for binary integer and online linear programming (Q6160284)
- Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality (Q6161313)
- SPIRAL: a superlinearly convergent incremental proximal algorithm for nonconvex finite sum minimization (Q6498409)
- High-dimensional limit of one-pass SGD on least squares (Q6597217)
- Global stability of first-order methods for coercive tame functions (Q6608042)
- A selective review on statistical methods for massive data computation: distributed computing, subsampling, and minibatch techniques (Q6620576)
- Variance-reduced reshuffling gradient descent for nonconvex optimization: centralized and distributed algorithms (Q6659241)