Scaling positive random matrices: concentration and asymptotic convergence
Publication:2679643
DOI: 10.1214/22-ECP502
zbMATH Open: 1506.60014
arXiv: 2012.06393
OpenAlex: W3111170930
MaRDI QID: Q2679643
FDO: Q2679643
Publication date: 23 January 2023
Published in: Electronic Communications in Probability
Abstract: It is well known that any positive matrix can be scaled to have prescribed row and column sums by multiplying its rows and columns by certain positive scaling factors (which are unique up to a positive scalar). This procedure is known as matrix scaling, and has found numerous applications in operations research, economics, image processing, and machine learning. In this work, we investigate the behavior of the scaling factors and the resulting scaled matrix when the matrix to be scaled is random. Specifically, letting $\widetilde{A}$ be a positive and bounded random matrix whose entries assume a certain type of independence, we provide a concentration inequality for the scaling factors of $\widetilde{A}$ around those of its expectation $A = \mathbb{E}[\widetilde{A}]$. This result is employed to bound the convergence rate of the scaling factors of $\widetilde{A}$ to those of $A$, as well as the concentration of the scaled version of $\widetilde{A}$ around the scaled version of $A$ in operator norm, as the dimensions grow. When the entries of $\widetilde{A}$ are independent, the matrix is square of size $N \times N$, and all prescribed row and column sums are $1$ (i.e., doubly-stochastic matrix scaling), both of the previously-mentioned bounds are $\mathcal{O}(\sqrt{\log N / N})$ with high probability. We demonstrate our results in several simulations.
Full work available at URL: https://arxiv.org/abs/2012.06393
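As a concrete illustration of the scaling procedure described in the abstract, the sketch below applies the classical Sinkhorn–Knopp iteration (alternately normalizing rows and columns) to a bounded positive random matrix, producing an approximately doubly-stochastic scaled matrix. This is a minimal sketch for illustration only; the function name, tolerance, and test matrix are assumptions, not taken from the paper.

```python
import numpy as np

def sinkhorn_knopp(A, max_iter=1000, tol=1e-10):
    """Scale a positive matrix A toward unit row and column sums.

    Returns scaling vectors d_row, d_col such that
    diag(d_row) @ A @ diag(d_col) is approximately doubly stochastic.
    Minimal sketch of the classical Sinkhorn-Knopp iteration;
    names and tolerances are illustrative.
    """
    m, n = A.shape
    d_row = np.ones(m)
    d_col = np.ones(n)
    for _ in range(max_iter):
        # Choose d_row so that the row sums of diag(d_row) A diag(d_col) equal 1.
        d_row = 1.0 / (A @ d_col)
        # Then choose d_col so that the column sums equal 1.
        d_col = 1.0 / (A.T @ d_row)
        S = d_row[:, None] * A * d_col[None, :]
        if (np.max(np.abs(S.sum(axis=1) - 1.0)) < tol
                and np.max(np.abs(S.sum(axis=0) - 1.0)) < tol):
            break
    return d_row, d_col

# Example: scale a bounded positive random matrix (i.i.d. entries in [1, 2]).
rng = np.random.default_rng(0)
N = 500
A_tilde = rng.uniform(1.0, 2.0, size=(N, N))
d_row, d_col = sinkhorn_knopp(A_tilde)
S = d_row[:, None] * A_tilde * d_col[None, :]
print(np.allclose(S.sum(axis=1), 1.0), np.allclose(S.sum(axis=0), 1.0))
```

In this doubly-stochastic setting the paper studies how close the scaling factors and the scaled matrix computed from the random matrix $\widetilde{A}$ are to those computed from its expectation $\mathbb{E}[\widetilde{A}]$ as $N$ grows.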
Recommendations
- Matrix scaling limits in finitely many iterations
- Scaling of symmetric matrices by positive diagonal congruence
- Scaling Matrices to Prescribed Row and Column Maxima
- A Symmetry Preserving Algorithm for Matrix Scaling
- On the rate of convergence of deterministic and randomized RAS matrix scaling algorithms
Cites Work
- Concerning nonnegative matrices and doubly stochastic matrices
- A Relationship Between Arbitrary Positive Matrices and Doubly Stochastic Matrices
- High-Dimensional Probability
- Symmetrizing smoothing filters
- Scaling of matrices to achieve specified row and column sums
- The diagonal equivalence of a nonnegative matrix to a stochastic matrix
- A Comparative Study of Algorithms for Matrix Balancing
- The Sinkhorn–Knopp Algorithm: Convergence and Applications
- On information plus noise kernel random matrices
- The collected works of Wassily Hoeffding. Ed. by N. I. Fisher and P. K. Sen
- The DAD Theorem for Arbitrary Row Sums
- Diagonal Equivalence to Matrices with Prescribed Row and Column Sums
- Convergence of Entropic Schemes for Optimal Transport and Gradient Flows
- Spectral Convergence of Diffusion Maps: Improved Error Bounds and an Alternative Normalization
- Doubly Stochastic Normalization of the Gaussian Kernel Is Robust to Heteroskedastic Noise
- Manifold learning with bi-stochastic kernels
- Central limit theorems for entropy-regularized optimal transport on finite spaces and statistical applications
- Empirical Regularized Optimal Transport: Statistical Theory and Applications
- Stability of entropic optimal transport and Schrödinger bridges
- Asymptotics for Semidiscrete Entropic Optimal Transport
Cited In (1)