ASKIT: an efficient, parallel library for high-dimensional kernel summations
From MaRDI portal
Publication:2830640
Recommendations
- ASKIT: approximate skeletonization kernel-independent treecode in high dimensions
- Far-field compression for fast kernel summation methods in high dimensions
- A fast summation tree code for Matérn kernel
- Learning in high-dimensional feature spaces using ANOVA-based fast matrix-vector multiplication
- On the Nyström method for approximating a Gram matrix for improved kernel-based learning
Cites work
- Scientific article (zbMATH DE number 2038320; no title available)
- Scientific article (zbMATH DE number 4001209; no title available)
- Scientific article (zbMATH DE number 5485566; no title available)
- A distributed kernel summation framework for general-dimension machine learning
- A fast algorithm for particle simulations
- A fast summation tree code for Matérn kernel
- A kernel-independent adaptive fast multipole algorithm in two and three dimensions
- A randomized algorithm for the decomposition of matrices
- A randomized approximate nearest neighbors algorithm
- ASKIT: approximate skeletonization kernel-independent treecode in high dimensions
- Adaptive Sampling and Fast Low-Rank Matrix Approximation
- CUR matrix decompositions for improved data analysis
- Efficient Algorithms for Computing a Strong Rank-Revealing QR Factorization
- Far-field compression for fast kernel summation methods in high dimensions
- Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication
- Fast algorithms for classical physics
- Fast approximation of the discrete Gauss transform in higher dimensions
- Fast Monte Carlo algorithms for finding low-rank approximations
- Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions
- Gaussian processes for machine learning
- Kernel methods in machine learning
- Learning the kernel matrix with semidefinite programming
- On the Compression of Low Rank Matrices
- Randomized Algorithms for Matrices and Data
- The Fast Gauss Transform
- The fast generalized Gauss transform
- Variable kernel density estimation
Cited in (6)
- Far-field compression for fast kernel summation methods in high dimensions
- Hierarchically compositional kernels for scalable nonparametric learning
- ASKIT
- Algorithmic patterns for \(\mathcal {H}\)-matrices on many-core processors
- Fast approximation of the Gauss-Newton Hessian matrix for the multilayer perceptron
- ASKIT: approximate skeletonization kernel-independent treecode in high dimensions