The following pages link to Martin Takáč (Q263211):
Displaying 36 items.
- Parallel coordinate descent methods for big data optimization (Q263212) (← links)
- On optimal probabilities in stochastic coordinate descent methods (Q315487) (← links)
- Projected semi-stochastic gradient descent method with mini-batch scheme under weak strong convexity assumption (Q1695084) (← links)
- Matrix completion under interval uncertainty (Q1752160) (← links)
- Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences (Q2044479) (← links)
- Alternating maximization: unifying framework for 8 sparse PCA formulations and efficient parallel codes (Q2129204) (← links)
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function (Q2452370) (← links)
- Distributed Coordinate Descent Method for Learning with Big Data (Q2810888) (← links)
- (Q2896264) (← links)
- (Q2953662) (← links)
- Distributed Block Coordinate Descent for Minimizing Partially Separable Functions (Q3462314) (← links)
- (Q4366596) (← links)
- (Q4558572) (← links)
- Distributed optimization with arbitrary local solvers (Q4594835) (← links)
- A low-rank coordinate-descent algorithm for semidefinite programming relaxations of optimal power flow (Q4594836) (← links)
- On the complexity of parallel coordinate descent (Q4638927) (← links)
- (Q4969198) (← links)
- A robust multi-batch L-BFGS method for machine learning (Q4972551) (← links)
- Quasi-Newton methods for machine learning: forget the past, just sample (Q5058389) (← links)
- Randomized sketch descent methods for non-separable linearly constrained optimization (Q5077024) (← links)
- Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory (Q5112239) (← links)
- Efficient Serial and Parallel Coordinate Descent Methods for Huge-Scale Truss Topology Design (Q5176277) (← links)
- New Convergence Aspects of Stochastic Gradient Algorithms (Q5214284) (← links)
- An accelerated communication-efficient primal-dual optimization framework for structured machine learning (Q5859008) (← links)
- Inexact SARAH algorithm for stochastic optimization (Q5859016) (← links)
- Preconditioning meets biased compression for efficient distributed optimization (Q6149587) (← links)
- Decentralized personalized federated learning: lower bounds and optimal algorithm for all personalization modes (Q6170035) (← links)
- Random-reshuffled SARAH does not need full gradient computations (Q6204201) (← links)
- Entropy Penalized Semidefinite Programming (Q6297655) (← links)
- Exploiting higher-order derivatives in convex optimization methods (Q6506474) (← links)
- Stochastic gradient methods with preconditioned updates (Q6536836) (← links)
- Inexact tensor methods and their application to stochastic convex optimization (Q6585820) (← links)
- Exploring Jacobian Inexactness in Second-Order Methods for Variational Inequalities: Lower Bounds, Optimal Algorithms and Quasi-Newton Approximations (Q6730032) (← links)
- Newton Method Revisited: Global Convergence Rates up to $\mathcal{O}\left(k^{-3}\right)$ for Stepsize Schedules and Linesearch Procedures (Q6730563) (← links)
- OPTAMI: Global Superlinear Convergence of High-order Methods (Q6747450) (← links)
- Linear Convergence Rate in Convex Setup is Possible! Gradient Descent Method Variants under $(L_0,L_1)$-Smoothness (Q6759390) (← links)