Real-time photoacoustic projection imaging using deep learning

From MaRDI portal
Publication: 6296707

arXiv: 1801.06693
MaRDI QID: Q6296707
FDO: Q6296707


Authors: Johannes Schwab, Stephan Antholzer, Robert Nuster, Markus Haltmeier


Publication date: 20 January 2018

Abstract: Photoacoustic tomography (PAT) is an emerging, non-invasive hybrid imaging modality for visualizing light-absorbing structures in biological tissue. Recently developed PAT systems using arrays of 64 parallel integrating line detectors can capture photoacoustic projection images in fractions of a second. Standard image formation algorithms for this type of setup suffer from under-sampling due to the sparse detector array, blurring due to the finite impulse response of the detection system, and artifacts due to the limited detection view. To address these issues, in this paper we develop a new direct and non-iterative image reconstruction framework using deep learning. The proposed DALnet combines universal backprojection (UBP) with dynamic aperture length (DAL) correction and a deep convolutional neural network (CNN). Both subnetworks contain free parameters that are adjusted in the training phase. As demonstrated by simulation and experiment, the DALnet is capable of producing high-resolution projection images of 3D structures at a frame rate of over 50 images per second on a standard PC with an NVIDIA TITAN Xp GPU. The proposed network is shown to outperform state-of-the-art iterative total variation reconstruction algorithms in terms of reconstruction speed as well as various evaluation metrics.
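To illustrate the two-stage structure described in the abstract, the following is a minimal sketch of a DALnet-style reconstruction pipeline, assuming a PyTorch implementation. The class names, tensor shapes, and the simplified linear backprojection (a random matrix standing in for UBP) are assumptions for illustration only, not the authors' actual implementation; only the overall design, a trainable backprojection stage with per-detector aperture weights followed by a post-processing CNN, reflects the architecture outlined above.

```python
# Hypothetical sketch of a DALnet-style two-stage reconstruction:
# a trainable backprojection stage followed by a post-processing CNN.
# All names, shapes, and the placeholder backprojection matrix are
# assumptions, not the published implementation.
import torch
import torch.nn as nn


class TrainableBackprojection(nn.Module):
    """Linear backprojection with per-detector aperture weights (DAL-style)."""

    def __init__(self, n_detectors: int, image_size: int):
        super().__init__()
        # One weight per detector channel, adjusted during training.
        self.aperture_weights = nn.Parameter(torch.ones(n_detectors))
        # Fixed random matrix as a stand-in for the UBP operator.
        self.register_buffer(
            "bp_matrix",
            torch.randn(image_size * image_size, n_detectors) / n_detectors,
        )
        self.image_size = image_size

    def forward(self, sinogram: torch.Tensor) -> torch.Tensor:
        # sinogram: (batch, n_detectors, n_time_samples)
        weighted = sinogram * self.aperture_weights[None, :, None]
        # Collapse the time axis and map detector data onto the image grid.
        projected = weighted.sum(dim=-1) @ self.bp_matrix.T
        return projected.view(-1, 1, self.image_size, self.image_size)


class PostProcessingCNN(nn.Module):
    """Small residual CNN that removes under-sampling and blurring artifacts."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual correction of the backprojected image.
        return x + self.net(x)


class DALnetSketch(nn.Module):
    def __init__(self, n_detectors: int = 64, image_size: int = 128):
        super().__init__()
        self.backprojection = TrainableBackprojection(n_detectors, image_size)
        self.cnn = PostProcessingCNN()

    def forward(self, sinogram: torch.Tensor) -> torch.Tensor:
        return self.cnn(self.backprojection(sinogram))


if __name__ == "__main__":
    model = DALnetSketch()
    data = torch.randn(4, 64, 256)   # batch of simulated 64-channel sinograms
    print(model(data).shape)         # -> torch.Size([4, 1, 128, 128])
```

Because both stages are differentiable, their free parameters (the aperture weights and the CNN filters) can be trained jointly end-to-end, which is the property the abstract attributes to the two subnetworks.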

This page was built for publication: Real-time photoacoustic projection imaging using deep learning