Publication | Date of Publication | Type |
---|---|---|
Random-reshuffled SARAH does not need full gradient computations | 2024-03-27 | Paper |
Preconditioning meets biased compression for efficient distributed optimization | 2024-02-06 | Paper |
Stochastic Gradient Descent with Preconditioned Polyak Step-size | 2023-10-03 | Paper |
Advancing the lower bounds: An accelerated, stochastic, second-order method with optimal adaptation to inexactness | 2023-09-04 | Paper |
Convergence analysis of stochastic gradient descent with adaptive preconditioning for non-convex and convex functions | 2023-08-27 | Paper |
Decentralized personalized federated learning: lower bounds and optimal algorithm for all personalization modes | 2023-07-12 | Paper |
Hybrid Methods in Polynomial Optimisation | 2023-05-25 | Paper |
Cubic Regularization is the Key! The First Accelerated Quasi-Newton Method with a Global Convergence Rate of $O(k^{-2})$ for Convex Functions | 2023-02-09 | Paper |
Quasi-Newton methods for machine learning: forget the past, just sample | 2022-12-20 | Paper |
A Damped Newton Method Achieves Global $O\left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate | 2022-10-31 | Paper |
Effects of momentum scaling for SGD | 2022-10-21 | Paper |
FLECS: A Federated Learning Second-Order Framework via Compression and Sketching | 2022-06-04 | Paper |
Randomized sketch descent methods for non-separable linearly constrained optimization | 2022-05-17 | Paper |
Alternating maximization: unifying framework for 8 sparse PCA formulations and efficient parallel codes | 2022-04-22 | Paper |
The Power of First-Order Smooth Optimization for Black-Box Non-Smooth Problems | 2022-01-28 | Paper |
Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences | 2021-08-09 | Paper |
An accelerated communication-efficient primal-dual optimization framework for structured machine learning | 2021-04-15 | Paper |
Inexact SARAH algorithm for stochastic optimization | 2021-04-15 | Paper |
Inexact Tensor Methods and Their Application to Stochastic Convex Optimization | 2020-12-31 | Paper |
https://portal.mardi4nfdi.de/entity/Q4969198 | 2020-10-05 | Paper |
Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory | 2020-05-28 | Paper |
New Convergence Aspects of Stochastic Gradient Algorithms | 2020-02-07 | Paper |
A robust multi-batch L-BFGS method for machine learning | 2019-11-25 | Paper |
Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample | 2019-01-28 | Paper |
https://portal.mardi4nfdi.de/entity/Q4558572 | 2018-11-22 | Paper |
Matrix completion under interval uncertainty | 2018-05-24 | Paper |
On the complexity of parallel coordinate descent | 2018-05-02 | Paper |
Entropy Penalized Semidefinite Programming | 2018-02-12 | Paper |
Projected semi-stochastic gradient descent method with mini-batch scheme under weak strong convexity assumption | 2018-02-06 | Paper |
Dual Free Adaptive Minibatch SDCA for Empirical Risk Minimization | 2018-01-25 | Paper |
Distributed optimization with arbitrary local solvers | 2017-11-24 | Paper |
A low-rank coordinate-descent algorithm for semidefinite programming relaxations of optimal power flow | 2017-11-24 | Paper |
A Coordinate-Descent Algorithm for Tracking Solutions in Time-Varying Optimal Power Flows | 2017-10-19 | Paper |
https://portal.mardi4nfdi.de/entity/Q2953662 | 2017-01-05 | Paper |
On optimal probabilities in stochastic coordinate descent methods | 2016-09-21 | Paper |
A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning | 2016-06-15 | Paper |
Distributed Coordinate Descent Method for Learning with Big Data | 2016-06-06 | Paper |
Parallel coordinate descent methods for big data optimization | 2016-04-04 | Paper |
Distributed Block Coordinate Descent for Minimizing Partially Separable Functions | 2016-01-05 | Paper |
Hybrid Methods in Solving Alternating-Current Optimal Power Flows | 2015-10-07 | Paper |
Efficient Serial and Parallel Coordinate Descent Methods for Huge-Scale Truss Topology Design | 2015-03-03 | Paper |
Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function | 2014-06-02 | Paper |
https://portal.mardi4nfdi.de/entity/Q2896264 | 2012-07-16 | Paper |
https://portal.mardi4nfdi.de/entity/Q4366596 | 1998-04-03 | Paper |
Exploiting higher-order derivatives in convex optimization methods | 0001-01-03 | Paper |