Peter Richtárik

From MaRDI portal
Person:263210

Available identifiers

zbMath Open: richtarik.peter
DBLP: 62/8001
Wikidata: Q26704526
Scholia: Q26704526
MaRDI QID: Q263210

List of research outcomes

Publication | Date of Publication | Type
Faster Rates for Compressed Federated Learning with Client-Variance Reduction | 2024-03-26 | Paper
Unified analysis of stochastic gradient methods for composite convex and smooth optimization | 2023-11-09 | Paper
Direct nonlinear acceleration | 2023-07-12 | Paper
2Direction: Theoretically Faster Distributed Training with Bidirectional Communication Compression | 2023-05-21 | Paper
Optimal Time Complexities of Parallel Stochastic Optimization Methods Under a Fixed Computation Model | 2023-05-21 | Paper
Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization | 2023-05-21 | Paper
Stochastic distributed learning with gradient quantization and double-variance reduction | 2023-03-15 | Paper
On the convergence analysis of asynchronous SGD for solving consistent linear systems | 2023-02-21 | Paper
Catalyst Acceleration of Error Compensated Methods Leads to Better Communication Complexity | 2023-01-24 | Paper
Quasi-Newton methods for machine learning: forget the past, just sample | 2022-12-20 | Paper
A Damped Newton Method Achieves Global $O\left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate | 2022-10-31 | Paper
Dualize, split, randomize: toward fast nonsmooth optimization algorithms | 2022-10-04 | Paper
Best Pair Formulation & Accelerated Scheme for Non-Convex Principal Component Pursuit | 2022-09-23 | Paper
Uncertainty principle for communication compression in distributed and federated learning and the search for an optimal compressor | 2022-08-05 | Paper
RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates | 2022-07-26 | Paper
Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition | 2022-06-01 | Paper
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization | 2022-05-17 | Paper
Alternating maximization: unifying framework for 8 sparse PCA formulations and efficient parallel codes | 2022-04-22 | Paper
Revisiting Randomized Gossip Algorithms: General Framework, Convergence Rates and Novel Block and Accelerated Protocols | 2022-02-17 | Paper
Error Compensated Loopless SVRG, Quartz, and SDCA for Distributed Optimization | 2021-09-21 | Paper
Fastest rates for stochastic mirror descent methods | 2021-08-09 | Paper
Accelerated Bregman proximal gradient methods for relatively smooth convex optimization | 2021-08-09 | Paper
https://portal.mardi4nfdi.de/entity/Q4999081 | 2021-07-09 | Paper
Stochastic quasi-gradient methods: variance reduction via Jacobian sketching | 2021-07-02 | Paper
Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods | 2021-05-03 | Paper
Convergence Analysis of Inexact Randomized Iterative Methods | 2021-03-29 | Paper
An Optimal Algorithm for Strongly Convex Minimization under Affine Constraints | 2021-02-22 | Paper
Stochastic Three Points Method for Unconstrained Smooth Minimization | 2020-10-08 | Paper
Error Compensated Distributed SGD Can Be Accelerated | 2020-09-30 | Paper
A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments | 2020-08-03 | Paper
Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory | 2020-05-28 | Paper
Fast Linear Convergence of Randomized BFGS | 2020-02-26 | Paper
New Convergence Aspects of Stochastic Gradient Algorithms | 2020-02-07 | Paper
Randomized Projection Methods for Convex Feasibility: Conditioning and Convergence Rates | 2019-11-08 | Paper
Smooth minimization of nonsmooth functions with parallel coordinate descent methods | 2019-09-09 | Paper
MISO is Making a Comeback With Better Proofs and Rates | 2019-06-04 | Paper
L-SVRG and L-Katyusha with Arbitrary Sampling | 2019-06-04 | Paper
Stochastic Sign Descent Methods: New Algorithms and Better Theory | 2019-05-30 | Paper
A Stochastic Derivative Free Optimization Method with Momentum | 2019-05-30 | Paper
A Stochastic Decoupling Method for Minimizing the Sum of Smooth and Non-Smooth Functions | 2019-05-27 | Paper
RSN: Randomized Subspace Newton | 2019-05-26 | Paper
Best Pair Formulation & Accelerated Scheme for Non-convex Principal Component Pursuit | 2019-05-25 | Paper
Stochastic Distributed Learning with Gradient Quantization and Variance Reduction | 2019-04-10 | Paper
Coordinate Descent Face-Off: Primal or Dual? | 2019-02-06 | Paper
A Stochastic Derivative-Free Optimization Method with Importance Sampling: Theory and Learning to Control | 2019-02-04 | Paper
Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample | 2019-01-28 | Paper
https://portal.mardi4nfdi.de/entity/Q4558169 | 2018-11-21 | Paper
A Stochastic Penalty Model for Convex and Nonconvex Optimization with Big Constraints | 2018-10-31 | Paper
Parallel Stochastic Newton Method | 2018-10-22 | Paper
Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications | 2018-10-11 | Paper
Accelerated Coordinate Descent with Arbitrary Sampling and Best Rates for Minibatches | 2018-09-25 | Paper
Nonconvex Variance Reduced Optimization with Arbitrary Sampling | 2018-09-11 | Paper
The complexity of primal-dual fixed point methods for ridge regression | 2018-08-29 | Paper
Improving SAGA via a Probabilistic Interpolation with Gradient Descent | 2018-06-14 | Paper
Matrix completion under interval uncertainty | 2018-05-24 | Paper
On the complexity of parallel coordinate descent | 2018-05-02 | Paper
Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization | 2018-02-12 | Paper
Randomized Block Cubic Newton Method | 2018-02-12 | Paper
Stochastic Spectral and Conjugate Descent Methods | 2018-02-11 | Paper
Randomized projection methods for convex feasibility problems: conditioning and convergence rates | 2018-01-15 | Paper
Randomized Quasi-Newton Updates Are Linearly Convergent Matrix Inversion Algorithms | 2017-12-20 | Paper
Distributed optimization with arbitrary local solvers | 2017-11-24 | Paper
Semi-stochastic coordinate descent | 2017-11-24 | Paper
Global Convergence of Arbitrary-Block Gradient Methods for Generalized Polyak-Łojasiewicz Functions | 2017-09-09 | Paper
Privacy Preserving Randomized Gossip Algorithms | 2017-06-23 | Paper
Linearly Convergent Randomized Iterative Methods for Computing the Pseudoinverse | 2016-12-19 | Paper
Coordinate descent with arbitrary sampling II: expected separable overapproximation | 2016-11-08 | Paper
Coordinate descent with arbitrary sampling I: algorithms and complexity | 2016-11-08 | Paper
Optimization in high dimensions via accelerated, parallel, and proximal coordinate descent | 2016-11-07 | Paper
On optimal probabilities in stochastic coordinate descent methods | 2016-09-21 | Paper
Inexact coordinate descent: complexity and preconditioning | 2016-08-31 | Paper
Distributed coordinate descent method for learning with big data | 2016-06-06 | Paper
Parallel coordinate descent methods for big data optimization | 2016-04-04 | Paper
Stochastic Block BFGS: Squeezing More Curvature out of Data | 2016-03-31 | Paper
Importance Sampling for Minibatches | 2016-02-06 | Paper
Distributed Block Coordinate Descent for Minimizing Partially Separable Functions | 2016-01-05 | Paper
Stochastic Dual Ascent for Solving Linear Systems | 2015-12-21 | Paper
Randomized Iterative Methods for Linear Systems | 2015-12-09 | Paper
Accelerated, Parallel, and Proximal Coordinate Descent | 2015-11-04 | Paper
Separable approximations and decomposition methods for the augmented Lagrangian | 2015-09-04 | Paper
Efficient Serial and Parallel Coordinate Descent Methods for Huge-Scale Truss Topology Design | 2015-03-03 | Paper
Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function | 2014-06-02 | Paper
Generalized power method for sparse principal component analysis | 2012-07-13 | Paper
Approximate level method for nonsmooth convex minimization | 2012-05-08 | Paper
Improved Algorithms for Convex Minimization in Relative Scale | 2012-01-09 | Paper
Variance Reduced Distributed Non-Convex Optimization Using Matrix Stepsizes | N/A | Paper
Consensus-Based Optimization with Truncated Noise | N/A | Paper

Research outcomes over time

This page was built for person: Peter Richtárik