GPU acceleration of Hermite methods for the simulation of wave propagation
From MaRDI portal
Abstract: The Hermite methods of Goodrich, Hagstrom, and Lorenz (2006) use Hermite interpolation to construct high-order numerical methods for hyperbolic initial value problems. The structure of the method has several favorable features for parallel computing. In this work, we propose algorithms that take advantage of the many-core architecture of Graphics Processing Units. The algorithm exploits the compact stencil of Hermite methods and uses data structures that allow for efficient data loads and stores. Additionally, the highly localized evolution operator of Hermite methods allows us to combine multi-stage time-stepping methods within the new algorithms while incurring minimal accesses of global memory. Using a scalar linear wave equation, we study the algorithm by considering Hermite interpolation and evolution as individual kernels and, alternatively, by combining them into a monolithic kernel. For both approaches we demonstrate strategies to increase performance. Our numerical experiments show that although a two-kernel approach allows for better performance on the hardware, a monolithic kernel can offer a comparable time to solution with less global memory usage.
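The interpolation/evolution split described in the abstract can be illustrated with a minimal serial sketch. This is not the paper's CUDA implementation: it is a hedged, one-dimensional Python model of a single Hermite step for the advection equation u_t + c u_x = 0, where an "interpolation kernel" builds the cubic Hermite interpolant of (u, u_x) per cell and an "evolution kernel" evolves it exactly along characteristics onto the dual (midpoint) grid. The function names are illustrative, not from the paper.

```python
import numpy as np

def interpolation_kernel(u, ux, h):
    """Interpolation stage: per-cell cubic Hermite coefficients
    p(s) = a0 + a1*s + a2*s^2 + a3*s^3, with s measured from the
    cell midpoint; the grid is periodic."""
    uR, uxR = np.roll(u, -1), np.roll(ux, -1)    # right-endpoint data
    a0 = 0.5 * (u + uR) + h * (ux - uxR) / 8.0   # interpolant value at midpoint
    a1 = 1.5 * (uR - u) / h - 0.25 * (ux + uxR)  # interpolant slope at midpoint
    a2 = (uxR - ux) / (2.0 * h)
    a3 = (ux + uxR) / h**2 - 2.0 * (uR - u) / h**3
    return a0, a1, a2, a3

def evolution_kernel(coeffs, c, dt):
    """Evolution stage: exact local evolution of the interpolant,
    u(x, t+dt) = p(x - c*dt), valid while |c*dt| <= h/2 so the foot
    of the characteristic stays inside the cell."""
    a0, a1, a2, a3 = coeffs
    s = -c * dt
    u_new = a0 + a1 * s + a2 * s**2 + a3 * s**3
    ux_new = a1 + 2.0 * a2 * s + 3.0 * a3 * s**2
    return u_new, ux_new

def hermite_step(u, ux, h, c, dt):
    """'Monolithic' variant: both stages fused into a single pass,
    so intermediate coefficients never leave local scope."""
    return evolution_kernel(interpolation_kernel(u, ux, h), c, dt)
```

As a sanity check, taking c*dt = h/2 places the foot of the characteristic exactly on the left endpoint of each cell, so the step must reproduce the left-endpoint data to rounding error; this mirrors why the compact, cell-local stencil maps so naturally onto GPU thread blocks.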
Cited in (6):
- Hermite Methods for the Scalar Wave Equation
- Graphics processing unit acceleration of the random phase approximation in the projector augmented wave method
- Fast evaluation of Helmholtz potential on graphics processing units (GPUs)
- GPU-acceleration of waveform relaxation methods for large differential systems
- JDiffraction: a GPGPU-accelerated JAVA library for numerical propagation of scalar wave fields
- Leapfrog time-stepping for Hermite methods