Iterative pre-conditioning for expediting the distributed gradient-descent method: the case of linear least-squares problem
Publication: 2071934
DOI: 10.1016/j.automatica.2021.110095
zbMATH Open: 1482.93034
arXiv: 2008.02856
OpenAlex: W4200078897
MaRDI QID: Q2071934
Authors: Kushal Chakrabarti, Nirupam Gupta, Nikhil Chopra
Publication date: 31 January 2022
Published in: Automatica
Abstract: This paper considers the multi-agent linear least-squares problem in a server-agent network. In this problem, the system comprises multiple agents, each holding a set of local data points, that are connected to a server. The goal for the agents is to compute a linear mathematical model that optimally fits the collective data points held by all the agents, without sharing their individual local data points. This goal can be achieved, in principle, using the server-agent variant of the traditional iterative gradient-descent method. The gradient-descent method converges linearly to a solution, and its rate of convergence is lower bounded by the conditioning of the agents' collective data points. If the data points are ill-conditioned, the gradient-descent method may require a large number of iterations to converge. We propose an iterative pre-conditioning technique that mitigates the deleterious effect of the conditioning of the data points on the rate of convergence of the gradient-descent method. We rigorously show that the resulting pre-conditioned gradient-descent method, with the proposed iterative pre-conditioning, achieves superlinear convergence when the least-squares problem has a unique solution. In general, the convergence is linear, with an improved rate of convergence compared to the traditional gradient-descent method and state-of-the-art accelerated gradient-descent methods. We further illustrate the improved rate of convergence of our proposed algorithm through experiments on different real-world least-squares problems in both noise-free and noisy computation environments.
Full work available at URL: https://arxiv.org/abs/2008.02856
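The abstract describes an iteratively pre-conditioned gradient-descent scheme for least squares. Below is a minimal, centralized sketch of that idea; the specific pre-conditioner update, step sizes (alpha, delta), and regularization (beta) are illustrative assumptions, not the authors' exact server-agent protocol.

```python
# Sketch of iteratively pre-conditioned gradient descent for least squares.
# The update rules and parameters here are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic collective data; in the server-agent setting, A and b would be
# partitioned row-wise across the agents.
A = rng.standard_normal((200, 20))
b = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(200)

n = A.shape[1]
H = A.T @ A                           # aggregated Hessian of the quadratic cost
grad = lambda x: A.T @ (A @ x - b)    # aggregated gradient

x = np.zeros(n)                       # server's current estimate
K = np.zeros((n, n))                  # iteratively updated pre-conditioner

alpha = 1.0 / np.linalg.norm(H, 2)    # step size for the pre-conditioner update (assumed)
delta = 1.0                           # step size for the estimate update (assumed)
beta = 1e-3                           # regularization keeping H + beta*I invertible (assumed)

for t in range(200):
    # Drive K toward (H + beta*I)^{-1} with one gradient step per iteration.
    K -= alpha * ((H + beta * np.eye(n)) @ K - np.eye(n))
    # Pre-conditioned gradient step on the estimate.
    x -= delta * (K @ grad(x))

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print("distance to the least-squares solution:", np.linalg.norm(x - x_star))
```

In the server-agent setting of the paper, the aggregated quantities above would instead be assembled from agent-wise contributions (each agent reporting terms built from its local data), so that raw data points are never shared with the server.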
Recommendations
- A Distributed Normalized Explicit Preconditioned Conjugate Gradient Method
- A preconditioner for least-squares distributed parameter estimation
- On the convergence of the conditional gradient method in distributed optimization problems
- Distributed iteratively reweighted least squares and applications
- A preconditioned conjugate gradient method on a distributed memory multiprocessor
- Linear convergence of primal-dual gradient methods and their performance in distributed optimization
- Exact spectral-like gradient method for distributed optimization
- Fast Distributed Gradient Methods
- Scalable distributed least square algorithms for large-scale linear equations via an optimization approach
- Gradient-free distributed optimization with exact convergence
Cites Work
- Title not available
- Title not available
- Title not available
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- Title not available
- Title not available
- Title not available
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- Some methods of speeding up the convergence of iteration methods
- Application of machine learning techniques for supply chain demand forecasting
- Linear Hypothesis Testing in Dense High-Dimensional Linear Models
- Analysis of Optimization Algorithms via Integral Quadratic Constraints: Nonstrongly Convex Problems
- Distributed Solution of Large-Scale Linear Systems via Accelerated Projection-Based Consensus
Cited In (4)
- A control theoretic framework for adaptive gradient optimizers
- A Distributed Normalized Explicit Preconditioned Conjugate Gradient Method
- Scalable distributed least square algorithms for large-scale linear equations via an optimization approach
- Reduced-order identification methods: hierarchical algorithm or variable elimination algorithm