Iterative pre-conditioning for expediting the distributed gradient-descent method: the case of linear least-squares problem
Publication:2071934
DOI: 10.1016/j.automatica.2021.110095 · zbMath: 1482.93034 · arXiv: 2008.02856 · OpenAlex: W4200078897 · MaRDI QID: Q2071934
Nirupam Gupta, Kushal Chakrabarti, Nikhil Chopra
Publication date: 31 January 2022
Published in: Automatica
Full work available at URL: https://arxiv.org/abs/2008.02856
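The publication concerns speeding up gradient descent on a linear least-squares problem by pre-conditioning. A minimal illustrative sketch of that general idea (not the paper's distributed, iteratively-built pre-conditioner — here the pre-conditioner is simply the inverse Hessian, computed directly) is:

```python
import numpy as np

# Sketch: plain vs. pre-conditioned gradient descent on
# min_x ||A x - b||^2. Assumed setup, for illustration only; the paper's
# method builds the pre-conditioner iteratively in a distributed setting.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)  # exact solution

H = A.T @ A                        # Hessian of the cost (up to a factor of 2)
grad = lambda x: H @ x - A.T @ b   # gradient (same scaling)

# Plain gradient descent: step size limited by the largest eigenvalue of H,
# so convergence slows as the problem becomes ill-conditioned.
alpha = 1.0 / np.linalg.eigvalsh(H)[-1]
x = np.zeros(5)
for _ in range(50):
    x = x - alpha * grad(x)
err_gd = np.linalg.norm(x - x_star)

# Pre-conditioned gradient descent: multiplying the gradient by H^{-1}
# makes the iteration Newton-like; for a quadratic cost it lands on the
# minimizer in essentially one step.
K = np.linalg.inv(H)
y = np.zeros(5)
for _ in range(50):
    y = y - K @ grad(y)
err_pgd = np.linalg.norm(y - x_star)

print(err_gd, err_pgd)
```

After the same number of iterations the pre-conditioned error is at machine precision while the plain gradient-descent error is not, which is the speed-up the title refers to.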
Related Items (2)
- A control theoretic framework for adaptive gradient optimizers
- Scalable distributed least square algorithms for large-scale linear equations via an optimization approach
Uses Software
Cites Work
- Application of machine learning techniques for supply chain demand forecasting
- A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
- Linear Hypothesis Testing in Dense High-Dimensional Linear Models
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- Analysis of Optimization Algorithms via Integral Quadratic Constraints: Nonstrongly Convex Problems
- Distributed Solution of Large-Scale Linear Systems via Accelerated Projection-Based Consensus
- Some methods of speeding up the convergence of iteration methods