Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms
Publication: 4598334
Abstract: We develop and analyze a broad family of stochastic/randomized algorithms for inverting a matrix. We also develop specialized variants that maintain symmetry or positive definiteness of the iterates. All methods in the family converge globally and linearly (i.e., the error decays exponentially), with explicit rates. In special cases, we obtain stochastic block variants of several quasi-Newton updates, including bad Broyden (BB), good Broyden (GB), Powell-symmetric-Broyden (PSB), Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS). Ours are the first stochastic versions of these updates shown to converge to the inverse of a fixed matrix. Through a dual viewpoint we uncover a fundamental link between quasi-Newton updates and approximate inverse preconditioning. Further, we develop an adaptive variant of randomized block BFGS (AdaRBFGS), in which the distribution underlying the stochasticity of the method is modified throughout the iterative process to achieve faster convergence. By inverting several matrices from varied applications, we demonstrate that AdaRBFGS is highly competitive with the well-established Newton-Schulz and minimal residual methods; in particular, on large-scale problems it outperforms these standard methods by orders of magnitude. Efficient methods for estimating the inverses of very large matrices are a much-needed tool for preconditioning and variable metric optimization in the big data era.
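To make the abstract's description concrete, below is a minimal sketch of the kind of update it describes: a randomized block BFGS-style step for approximating the inverse of a symmetric positive definite matrix, shown alongside the classical Newton-Schulz baseline the abstract mentions. This is an illustrative reading, not the authors' exact algorithm; the function names, the sketch size `q`, and the choice of Gaussian sketches are assumptions made here for the example.

```python
import numpy as np

def rand_block_bfgs_inverse(A, q=20, iters=300, seed=0):
    """Randomized block BFGS-style iteration for approximating A^{-1}.

    Illustrative sketch only: assumes A is symmetric positive definite
    and uses Gaussian sketches; the paper analyzes a broader family of
    sketching distributions and update rules.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    I = np.eye(n)
    X = I.copy()                                 # initial inverse estimate
    for _ in range(iters):
        S = rng.standard_normal((n, q))          # random n-by-q sketch
        M = np.linalg.solve(S.T @ (A @ S), S.T)  # (S^T A S)^{-1} S^T
        P = I - S @ (M @ A)                      # I - S (S^T A S)^{-1} S^T A
        X = P @ X @ P.T + S @ M                  # symmetric BFGS-type update
    return X

def newton_schulz_inverse(A, iters=30):
    """Classical Newton-Schulz iteration X <- X (2I - A X).

    Converges quadratically from the standard safe initialization
    X_0 = A^T / (||A||_1 ||A||_inf).
    """
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        X = X @ (2 * np.eye(n) - A @ X)
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    B = rng.standard_normal((100, 100))
    A = B @ B.T + 100 * np.eye(100)              # well-conditioned SPD test matrix
    for name, X in [("randomized block BFGS", rand_block_bfgs_inverse(A)),
                    ("Newton-Schulz", newton_schulz_inverse(A))]:
        print(name, np.linalg.norm(A @ X - np.eye(100)))
```

Note that X = A^{-1} is a fixed point of the BFGS-style update; the abstract's linear convergence claim refers to the expected decay of the error of the iterates X_k under such random steps.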
Recommendations
- Global convergence of online limited memory BFGS
- A stochastic quasi-Newton method for large-scale optimization
- Different stochastic algorithms to obtain matrix inversion
- Matrix algebras in quasi-Newton methods for unconstrained minimization
- Quasi-Newton methods for machine learning: forget the past, just sample
Cites work
- scientific article; zbMATH DE number 1953444 (title unavailable)
- A Class of Methods for Solving Nonlinear Simultaneous Equations
- A Family of Variable-Metric Methods Derived by Variational Means
- A New Method for Obtaining the Inverse Matrix
- A Rapidly Convergent Descent Method for Minimization
- A comparative study of sparse approximate inverse preconditioners
- A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix
- A randomized Kaczmarz algorithm with exponential convergence
- Approximate Inverse Preconditioners via Sparse-Sparse Iterations
- Broyden updating, the good and the bad!
- Conditioning of Quasi-Newton Methods for Function Minimization
- Factorized Sparse Approximate Inverse Preconditionings I. Theory
- Frobenius norm minimization and probing for preconditioning
- Iterative Hessian sketch: fast and accurate solution approximation for constrained least-squares
- Modification Methods for Inverting Matrices and Solving Systems of Linear Algebraic Equations
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- On A Class of Limited Memory Preconditioners For Large Scale Linear Systems With Multiple Right-Hand Sides
- Positive definite matrices
- Probabilistic interpretation of linear solvers
- Randomized Hessian estimation and directional search
- Randomized Sketches of Convex Programs With Sharp Guarantees
- Randomized iterative methods for linear systems
- Randomized methods for linear constraints: convergence rates and conditioning
- Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms
- Sparse Approximate-Inverse Preconditioners Using Norm-Minimization Techniques
- Stability of Methods for Matrix Inversion
- The University of Florida sparse matrix collection
- Updating Quasi-Newton Matrices with Limited Storage
- Variable Metric Method for Minimization
- Variable metric random pursuit
- Variations on Variable-Metric Methods
Cited in (16)
- slimTrain: A Stochastic Approximation Method for Training Separable Deep Neural Networks
- Towards explicit superlinear convergence rate for SR1
- Sampled limited memory methods for massive linear inverse problems
- On Adaptive Sketch-and-Project for Solving Linear Systems
- Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms
- Stochastic reformulations of linear systems: algorithms and convergence theory
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Greedy quasi-Newton methods with explicit superlinear convergence
- Rates of superlinear convergence for classical quasi-Newton methods
- Sampling Kaczmarz-Motzkin method for linear feasibility problems: generalization and acceleration
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- Unifying relations between iterative linear equation solvers and explicit Euler approximations for associated parabolic regularized equations
- Sharp Analysis of Sketch-and-Project Methods via a Connection to Randomized Singular Value Decomposition
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- Beyond the EM algorithm: constrained optimization methods for latent class model
- Batched Stochastic Gradient Descent with Weighted Sampling