ASKIT: an efficient, parallel library for high-dimensional kernel summations
DOI: 10.1137/15M1026468 · zbMATH Open: 1416.65578 · MaRDI QID: Q2830640
Authors: William B. March, Bo Xiao, Chenhan D. Yu, George Biros
Publication date: 28 October 2016
Published in: SIAM Journal on Scientific Computing
Recommendations
- ASKIT: approximate skeletonization kernel-independent treecode in high dimensions
- Far-field compression for fast kernel summation methods in high dimensions
- A fast summation tree code for Matérn kernel
- Learning in high-dimensional feature spaces using ANOVA-based fast matrix-vector multiplication
- On the Nyström method for approximating a Gram matrix for improved kernel-based learning
Keywords: machine learning; linear algebra; kernel machines; treecodes; \(N\)-body methods; randomized matrix approximation
MSC: Learning and adaptive systems in artificial intelligence (68T05); Parallel numerical computation (65Y05); Packaged methods for numerical algorithms (65Y15); Approximation algorithms (68W25); Parallel algorithms in computer science (68W10)
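The keywords above name the problem the paper addresses: kernel summation, i.e. the \(N\)-body sums \(u_i = \sum_j K(x_i, x_j)\, w_j\) that treecodes such as ASKIT approximate in subquadratic time. As a point of reference (not ASKIT's API; the Gaussian kernel and bandwidth parameter here are illustrative assumptions), the exact dense computation can be sketched as:

```python
import numpy as np

def gaussian_kernel_sum(X, w, h=1.0):
    """Naive O(N^2) kernel summation with an (assumed) Gaussian kernel:
    u_i = sum_j exp(-||x_i - x_j||^2 / (2 h^2)) w_j.

    Fast methods (treecodes, FMM-like schemes such as ASKIT) approximate
    this sum without forming the dense N-by-N kernel matrix.
    """
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.maximum(D2, 0.0, out=D2)  # guard against tiny negatives from rounding
    K = np.exp(-D2 / (2.0 * h**2))  # dense kernel matrix, O(N^2) memory
    return K @ w

# Example: identical points give K = all-ones, so u_i = sum(w)
u = gaussian_kernel_sum(np.zeros((3, 2)), np.array([1.0, 2.0, 3.0]))
```

The naive version costs \(O(N^2)\) time and memory; the paper's contribution is making this scale to large \(N\) and high dimension.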
Cites Work
- Gaussian processes for machine learning.
- Title not available
- Variable kernel density estimation
- A fast algorithm for particle simulations
- Kernel methods in machine learning
- Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions
- CUR matrix decompositions for improved data analysis
- Fast Monte Carlo algorithms for finding low-rank approximations
- Randomized Algorithms for Matrices and Data
- Title not available
- Learning the kernel matrix with semidefinite programming
- A kernel-independent adaptive fast multipole algorithm in two and three dimensions
- Efficient Algorithms for Computing a Strong Rank-Revealing QR Factorization
- On the Compression of Low Rank Matrices
- The Fast Gauss Transform
- Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication
- A randomized algorithm for the decomposition of matrices
- A randomized approximate nearest neighbors algorithm
- Title not available
- Adaptive Sampling and Fast Low-Rank Matrix Approximation
- Fast algorithms for classical physics
- Fast approximation of the discrete Gauss transform in higher dimensions
- The fast generalized Gauss transform
- ASKIT: approximate skeletonization kernel-independent treecode in high dimensions
- A fast summation tree code for Matérn kernel
- Far-field compression for fast kernel summation methods in high dimensions
- A distributed kernel summation framework for general‐dimension machine learning
Cited In (6)
- Far-field compression for fast kernel summation methods in high dimensions
- Hierarchically compositional kernels for scalable nonparametric learning
- ASKIT
- Algorithmic patterns for \(\mathcal {H}\)-matrices on many-core processors
- Fast approximation of the Gauss-Newton Hessian matrix for the multilayer perceptron
- ASKIT: approximate skeletonization kernel-independent treecode in high dimensions