The following pages link to Konstantin Mishchenko (Q2082231):
Displayed 10 items.
- Dualize, split, randomize: toward fast nonsmooth optimization algorithms (Q2082232)
- A Distributed Flexible Delay-Tolerant Proximal Gradient Algorithm (Q5220423)
- Stochastic distributed learning with gradient quantization and double-variance reduction (Q5882226)
- Regularized Newton Method with Global \(\boldsymbol{\mathcal{O}(1/k^2)}\) Convergence (Q6116237)
- Super-Universal Regularized Newton Method (Q6136654)
- A Stochastic Penalty Model for Convex and Nonconvex Optimization with Big Constraints (Q6309043)
- Stochastic Distributed Learning with Gradient Quantization and Variance Reduction (Q6317002)
- A Stochastic Decoupling Method for Minimizing the Sum of Smooth and Non-Smooth Functions (Q6319479)
- MISO is Making a Comeback With Better Proofs and Rates (Q6319945)
- On Seven Fundamental Optimization Challenges in Machine Learning (Q6381109)