Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity
DOI: 10.1137/19M129526X · zbMATH Open: 1461.65067 · arXiv: 1706.02205 · OpenAlex: W3156927273 · MaRDI QID: Q92247 · FDO: Q92247
Authors: Florian Schäfer, T. J. Sullivan, Houman Owhadi
Publication date: 7 June 2017
Published in: Multiscale Modeling & Simulation
Full work available at URL: https://arxiv.org/abs/1706.02205
Recommendations
- On the Nyström method for approximating a Gram matrix for improved kernel-based learning
- Learning Theory
- Low-Rank Eigenvector Compression of Posterior Covariance Matrices for Linear Gaussian Inverse Problems
- An \(O(N \log N)\) hierarchical random compression method for kernel matrices by sampling partial matrix entries
- Approximation of high-dimensional kernel matrices by multilevel circulant matrices
Keywords: principal component analysis; Cholesky factorization; covariance function; gamblet transform; kernel matrix; sparsity
Mathematics Subject Classification:
- Factor analysis and principal components; correspondence analysis (62H25)
- Nontrigonometric harmonic analysis involving wavelets and other special systems (42C40)
- Computational methods for sparse matrices (65F50)
- Martingales with discrete parameter (60G42)
- Numerical linear algebra (65F99)
- Multigrid methods; domain decomposition for boundary value problems involving PDEs (65N55)
- Probabilistic methods, particle methods, etc. for boundary value problems involving PDEs (65N75)
- Analysis of algorithms and problem complexity (68Q25)
- Analysis of algorithms (68W40)
Cites Work
- Efficient numerical methods for non-local operators. \(\mathcal H^2\)-matrix compression, algorithms and analysis.
- Stochastic Models That Separate Fractal Dimension and the Hurst Effect
- Title not available
- Studies in the history of probability and statistics XLIX: On the Matérn correlation family
- DOI 10.1162/15324430260185619
- Gaussian processes for machine learning.
- Gaussian Predictive Process Models for Large Spatial Data Sets
- Title not available
- Title not available
- ON STATIONARY PROCESSES IN THE PLANE
- An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach
- The analysis of a nested dissection algorithm
- LU factorization of non-standard forms and direct multiresolution solvers
- A sparse matrix arithmetic based on \({\mathfrak H}\)-matrices. I: Introduction to \({\mathfrak H}\)-matrices
- Block red-black ordering: A new ordering strategy for parallelization of ICCG method
- Existence of \(\mathcal H\)-matrix approximants to the inverse FE-matrix of elliptic operators with \(L^\infty\)-coefficients
- Adaptive low-rank approximation of collocation matrices
- The Schur complement and its applications
- Mitigating the influence of the boundary on PDE-based covariance operators
- Data-sparse approximation by adaptive \({\mathcal H}^2\)-matrices
- Constructing continuous stationary covariances as limits of the second-order stochastic difference equations
- A sparse \({\mathcal H}\)-matrix arithmetic. II: Application to multi-dimensional problems
- Randomized matrix-free trace and log-determinant estimators
- An \(\mathcal O(N\log N)\) fast direct solver for partial hierarchically semi-separable matrices. With application to radial basis function interpolation
- Hierarchical interpolative factorization for elliptic operators: integral equations
- Graph colouring algorithms
- Polyharmonic homogenization, rough polyharmonic splines and sparse super-localization
- Decay bounds for functions of Hermitian matrices with banded or Kronecker structure
- Multigrid with Rough Coefficients and Multiresolution Operator Decomposition from Hierarchical Information Games
- Localization in matrix computations: theory and applications
- SelInv---An Algorithm for Selected Inversion of a Sparse Symmetric Matrix
- DOI 10.1162/153244303768966085
- Fast algorithms for hierarchically semiseparable matrices
- A unifying view of sparse approximate Gaussian process regression
- An analysis of a class of variational multiscale methods based on subspace decomposition
- Localization of elliptic multiscale problems
- Fast wavelet transforms and numerical algorithms I
- Decay Rates for Inverses of Band Matrices
- Title not available
- The Evolution of the Minimum Degree Ordering Algorithm
- Generalized Nested Dissection
- On Finding Supernodes for Sparse Matrix Computations
- An Iterative Solution Method for Linear Systems of Which the Coefficient Matrix is a Symmetric M-Matrix
- Title not available
- Title not available
- Title not available
- Title not available
- An Efficient Block-Oriented Approach to Parallel Sparse Cholesky Factorization
- 2010 Rietz lecture: When does the screening effect hold?
- Orderings for Factorized Sparse Approximate Inverse Preconditioners
- A Full Scale Approximation of Covariance Functions for Large Spatial Data Sets
- The Bramble--Hilbert Lemma for Convex Domains
- Numerical homogenization of heterogeneous fractional Laplacians
- Whittle-Matérn priors for Bayesian statistical inversion with applications in electrical impedance tomography
- New efficient and robust HSS Cholesky factorization of SPD matrices
- Operator-adapted wavelets, fast solvers, and numerical homogenization. From a game theoretic approach to numerical approximation and algorithm design
- A class of multi-resolution approximations for large spatial datasets
- Sparse Compression of Expected Solution Operators
- A Fast $ULV$ Decomposition Solver for Hierarchically Semiseparable Representations
- Correlation priors
- Nested Dissection of a Regular Finite Element Mesh
- Compressing Rank-Structured Matrices via Randomized Sampling
- A fast algorithm for particle simulations
- Sparse operator compression of higher-order elliptic operators with rough coefficients
- Propriétés des matrices ``bien localisées près de leur diagonale et quelques applications. (Properties of matrices ``well localized near the diagonal and some applications)
- Localization of matrix factorizations
- Hierarchical matrices. A means to efficiently solve elliptic boundary value problems
- Kernel methods in machine learning
- Approximation of solution operators of elliptic partial differential equations by \({\mathcal H}\)- and \({\mathcal H}^2\)-matrices
Cited In (34)
- Gamblets for opening the complexity-bottleneck of implicit schemes for hyperbolic and parabolic ODEs/PDEs with rough coefficients
- Multi-scale Vecchia approximations of Gaussian processes
- Sum of Kronecker products representation and its Cholesky factorization for spatial covariance matrices from large grids
- GParareal: a time-parallel ODE solver using Gaussian process emulation
- Scalable Bayesian Transport Maps for High-Dimensional Non-Gaussian Spatial Fields
- A modern retrospective on probabilistic numerics
- Scaled Vecchia approximation for fast computer-model emulation
- Do ideas have shape? Idea registration as the continuous limit of artificial neural networks
- An adaptive factorized Nyström preconditioner for regularized kernel matrices
- Hierarchical sparse Cholesky decomposition with applications to high-dimensional spatio-temporal filtering
- Sparse Recovery of Elliptic Solvers from Matrix-Vector Products
- A fast hierarchically preconditioned eigensolver based on multiresolution matrix decomposition
- Fast eigenpairs computation with operator adapted wavelets and hierarchical subspace correction
- Bayesian numerical methods for nonlinear partial differential equations
- Approximation of high-dimensional kernel matrices by multilevel circulant matrices
- Continuous relaxations for the traveling salesman problem
- Gaussian process hydrodynamics
- Coarsening in algebraic multigrid using Gaussian processes
- Fast macroscopic forcing method
- Analytical low-rank compression via proxy point selection
- Sparse Cholesky Factorization by Kullback--Leibler Minimization
- Kernel methods are competitive for operator learning
- Error analysis of kernel/GP methods for nonlinear and parametric PDEs
- Symmetry exploits for Bayesian cubature methods
- Bayesian nonparametric generative modeling of large multivariate non-Gaussian spatial fields
- A Bayesian conjugate gradient method (with discussion)
- Numerical homogenization beyond scale separation
- Multiresolution operator decomposition for flow simulation in fractured porous media
- De-noising by thresholding operator adapted wavelets
- Learning elliptic partial differential equations with randomized linear algebra
- Samplets: construction and scattered data compression
- Solving and learning nonlinear PDEs with Gaussian processes
- Bayesian nonstationary and nonparametric covariance estimation for large spatial data (with discussion)
Uses Software
- GPvecchia
Summary: The paper shows that dense kernel matrices arising from Green's functions of elliptic boundary value problems, and from many Gaussian process covariance models such as the Matérn family, admit sparse approximate Cholesky factorizations. After reordering the degrees of freedom by a maximin (coarse-to-fine) ordering, the Cholesky factors decay exponentially away from a sparsity pattern of near-linear size, so a zero-fill-in incomplete Cholesky factorization computes an \(\epsilon\)-accurate approximation of the matrix in computational complexity \(O(N \log^2(N) \log^{2d}(N/\epsilon))\) and memory \(O(N \log(N) \log^{d}(N/\epsilon))\). This yields near-linear-complexity algorithms for compression, inversion, and approximate principal component analysis of such matrices, and connects to the gamblet transform and operator-adapted wavelet constructions. The approach is situated relative to alternatives, including sparse linear solvers (e.g., multigrid, sparse Cholesky factorization), low-rank approximations (Nyström, rank-revealing Cholesky), hierarchical matrices (\(\mathcal H\), \(\mathcal H^2\), HODLR, HSS), and wavelet-based compression.
Summary_simple: Many problems involving Gaussian processes and partial differential equations (PDEs) lead to huge dense matrices built from kernel or covariance functions, which are too expensive to store and invert directly. This paper shows that if the data points are processed from coarse to fine, the Cholesky factorization of such a matrix becomes almost sparse: most of its entries are so small that they can be dropped without losing much accuracy. Exploiting this, the matrix can be compressed, inverted, and analyzed (for example, for principal component analysis) in roughly linear time in the number of data points, instead of the cubic cost of standard methods.
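The core idea summarized above can be sketched in a few lines of NumPy. The sketch below is illustrative only: it uses dense arrays, a greedy farthest-point (maximin) ordering, and a zero-fill-in incomplete Cholesky factorization restricted to a distance-based sparsity pattern. The exponential kernel, the choice rho = 3, and the small nugget term are arbitrary assumptions for the demo; a genuinely near-linear implementation would use sparse data structures and the aggregation techniques of the paper.

```python
import numpy as np

def exp_kernel(X, lengthscale=0.5):
    # Exponential (Matern nu = 1/2) kernel matrix; a Green's-function-like
    # covariance for which screening makes Cholesky factors nearly sparse.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-D / lengthscale)

def maximin_ordering(X):
    # Greedy coarse-to-fine ordering: each new point maximizes its distance
    # to the points already chosen; l[k] records that distance (its "length
    # scale"). The first point is chosen arbitrarily for simplicity.
    n = len(X)
    order = np.empty(n, dtype=int)
    l = np.empty(n)
    dist = np.linalg.norm(X - X[0], axis=1)
    order[0], l[0] = 0, dist.max()
    for k in range(1, n):
        i = int(np.argmax(dist))
        order[k], l[k] = i, dist[i]
        dist = np.minimum(dist, np.linalg.norm(X - X[i], axis=1))
    return order, l

def ichol(A, pattern):
    # Zero-fill-in incomplete Cholesky restricted to a boolean
    # lower-triangular pattern (the diagonal must be included).
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        for i in range(j, n):
            if not pattern[i, j]:
                continue
            s = A[i, j] - L[i, :j] @ L[j, :j]
            # Clamp guards against breakdown if the pattern is too sparse.
            L[i, j] = np.sqrt(max(s, 1e-12)) if i == j else s / L[j, j]
    return L

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
order, l = maximin_ordering(X)
Xo = X[order]
# Small nugget on the diagonal for numerical stability.
K = exp_kernel(Xo) + 1e-8 * np.eye(len(Xo))

# Keep L[i, j] only when x_i lies within rho * l[j] of x_j: outside this
# neighborhood the factors decay exponentially (screening effect), so for
# fixed rho the pattern has only a near-linear number of entries.
rho = 3.0
D = np.linalg.norm(Xo[:, None, :] - Xo[None, :, :], axis=-1)
pattern = np.tril(D <= rho * l[None, :])
L = ichol(K, pattern)

rel_err = np.linalg.norm(L @ L.T - K) / np.linalg.norm(K)
density = pattern.sum() / (len(Xo) * (len(Xo) + 1) / 2)
print(f"relative error {rel_err:.2e} using {density:.0%} of the lower triangle")
```

Increasing rho enlarges the pattern and drives the error down exponentially; with the full lower-triangular pattern, `ichol` reduces to the exact Cholesky factorization.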