DeepJSCC-l++: Robust and Bandwidth-Adaptive Wireless Image Transmission
Publication:6437516
arXiv: 2305.13161 · MaRDI QID: Q6437516 · FDO: Q6437516
Authors: Chenghong Bian, Yulin Shao, Deniz Gunduz
Publication date: 22 May 2023
Abstract: This paper presents a novel vision transformer (ViT) based deep joint source-channel coding (DeepJSCC) scheme, dubbed DeepJSCC-l++, which adapts to multiple target bandwidth ratios as well as different channel signal-to-noise ratios (SNRs) using a single model. To achieve this, the proposed DeepJSCC-l++ model is trained with different bandwidth ratios and SNRs, which are fed to the model as side information. The reconstruction losses corresponding to the different bandwidth ratios are calculated, and a new training methodology is proposed that dynamically assigns different weights to the losses of the different bandwidth ratios according to their individual reconstruction qualities. The shifted-window (Swin) transformer is adopted as the backbone of the DeepJSCC-l++ model. Extensive simulations show that the proposed DeepJSCC-l++ and successive refinement models can adapt to different bandwidth ratios and channel SNRs with marginal performance loss compared to separately trained models. The proposed schemes can also outperform the digital baseline, which concatenates BPG compression with a capacity-achieving channel code.
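The dynamic loss-weighting idea from the abstract can be sketched as follows. The abstract states only that weights are assigned according to per-bandwidth-ratio reconstruction quality; the specific softmax-over-losses rule, the `temperature` parameter, and the function names below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def dynamic_loss_weights(losses, temperature=1.0):
    """Hypothetical weighting rule: bandwidth ratios with a higher
    (worse) reconstruction loss receive a larger weight, computed as
    a numerically stable softmax over the per-ratio losses."""
    scaled = [l / temperature for l in losses]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_total_loss(losses, temperature=1.0):
    """Combine the per-bandwidth-ratio losses into one training
    objective using the dynamic weights above."""
    weights = dynamic_loss_weights(losses, temperature)
    return sum(w * l for w, l in zip(weights, losses))
```

In this sketch, a ratio that currently reconstructs poorly contributes more to the objective, pushing training effort toward it; lowering `temperature` sharpens that emphasis.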
Has companion code repository: https://github.com/aprilbian/deepjscc-lplusplus