Enabling a high throughput real time data pipeline for a large radio telescope array with GPUs

From MaRDI portal
Publication:548974

DOI: 10.1016/J.CPC.2010.06.019
zbMATH Open: 1220.85002
arXiv: 1003.5575
OpenAlex: W2126734610
Wikidata: Q59391807 (Scholia: Q59391807)
MaRDI QID: Q548974
FDO: Q548974


Authors: Yong-Cai Geng, Sumit K. Garg


Publication date: 30 June 2011

Published in: Computer Physics Communications

Abstract: The Murchison Widefield Array (MWA) is a next-generation radio telescope currently under construction in the remote Western Australia Outback. Raw data will be generated continuously at 5 GiB/s, grouped into 8 s cadences. This high throughput motivates the development of on-site, real-time processing and reduction in preference to archiving, transport and off-line processing. Each batch of 8 s data must be completely reduced before the next batch arrives. Maintaining real-time operation will require a sustained performance of around 2.5 TFLOP/s (including convolutions, FFTs, interpolations and matrix multiplications). We describe a scalable heterogeneous computing pipeline implementation, exploiting both the high computing density and FLOP-per-Watt ratio of modern GPUs. The architecture is highly parallel within and across nodes, with all major processing elements performed by GPUs. Necessary scatter-gather operations along the pipeline are loosely synchronized between the nodes hosting the GPUs. The MWA will be a frontier scientific instrument and a pathfinder for planned peta- and exascale facilities.
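The real-time constraint in the abstract can be made concrete with a back-of-envelope budget check. This is a minimal illustrative sketch, not code from the paper: only the figures 5 GiB/s, 8 s cadences, and 2.5 TFLOP/s come from the abstract; the helper `meets_deadline` and the pool-throughput parameter are assumptions for illustration.

```python
# Back-of-envelope real-time budget for the pipeline described in the
# abstract: raw data arrives at 5 GiB/s in 8 s batches, and each batch
# must be fully reduced before the next one lands.

DATA_RATE_GIB_S = 5.0     # continuous raw data rate (from the abstract)
CADENCE_S = 8.0           # batch cadence in seconds (from the abstract)
SUSTAINED_TFLOPS = 2.5    # required sustained compute (from the abstract)

# Per-batch quantities implied by those figures.
batch_size_gib = DATA_RATE_GIB_S * CADENCE_S         # 40 GiB per batch
work_per_batch_tflop = SUSTAINED_TFLOPS * CADENCE_S  # 20 TFLOP per batch

def meets_deadline(pool_tflops: float) -> bool:
    """Hypothetical check: a node pool delivering pool_tflops TFLOP/s
    clears one batch in work_per_batch_tflop / pool_tflops seconds,
    which must not exceed the 8 s cadence."""
    return work_per_batch_tflop / pool_tflops <= CADENCE_S

print(batch_size_gib, work_per_batch_tflop, meets_deadline(SUSTAINED_TFLOPS))
```

Under these assumptions, any aggregate GPU throughput at or above 2.5 TFLOP/s keeps the 8 s pipeline real-time; anything below it accumulates an ever-growing backlog.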


Full work available at URL: https://arxiv.org/abs/1003.5575




