Lossy Compression via Sparse Linear Regression: Performance Under Minimum-Distance Encoding

DOI: 10.1109/TIT.2014.2313085
zbMATH Open: 1360.94219
arXiv: 1202.0840
OpenAlex: W3103067640
MaRDI QID: Q2986321
FDO: Q2986321

Sekhar Tatikonda, Ramji Venkataramanan, Antony Joseph

Publication date: 16 May 2017

Published in: IEEE Transactions on Information Theory

Abstract: We study a new class of codes for lossy compression with the squared-error distortion criterion, designed using the statistical framework of high-dimensional linear regression. Codewords are linear combinations of subsets of columns of a design matrix. Called a Sparse Superposition or Sparse Regression codebook, this structure is motivated by an analogous construction proposed recently by Barron and Joseph for communication over an AWGN channel. For i.i.d. Gaussian sources and minimum-distance encoding, we show that such a code can attain the Shannon rate-distortion function with the optimal error exponent, for all distortions below a specified value. It is also shown that sparse regression codes are robust in the following sense: a codebook designed to compress an i.i.d. Gaussian source of variance $\sigma^2$ with (squared-error) distortion D can compress any ergodic source of variance less than $\sigma^2$ to within distortion D. Thus the sparse regression ensemble retains many of the good covering properties of the i.i.d. random Gaussian ensemble, while having a compact representation in terms of a matrix whose size is a low-order polynomial in the block length.
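
The construction described in the abstract admits a small numerical illustration. The Python sketch below is a toy, not the paper's implementation: the parameters M, L, n and the coefficient scaling c are illustrative choices. It draws an i.i.d. Gaussian design matrix with L sections of M columns each, so the codebook has M^L codewords and rate R = (L log M)/n nats per sample, and it encodes an i.i.d. N(0,1) source by brute-force minimum-distance search. For such a source the Shannon rate-distortion function is R(D) = (1/2) log(1/D).

```python
import itertools
import numpy as np

# Minimal sketch of a sparse regression (SPARC) codebook with brute-force
# minimum-distance encoding. All parameter values (M, L, n, c) are
# illustrative choices, not taken from the paper.

rng = np.random.default_rng(0)

M, L = 4, 3           # M columns per section, L sections -> M**L codewords
n = 12                # block length; rate R = L*log(M)/n nats per sample
c = 1.0 / np.sqrt(L)  # nonzero coefficient; scales codeword variance to ~1

A = rng.standard_normal((n, M * L))  # i.i.d. Gaussian design matrix

def codeword(choice):
    """Codeword for one column index chosen from each of the L sections."""
    beta = np.zeros(M * L)
    for sec, j in enumerate(choice):
        beta[sec * M + j] = c
    return A @ beta

x = rng.standard_normal(n)  # i.i.d. N(0,1) source sequence

# Minimum-distance encoding: exhaustive search over all M**L codewords.
best = min(itertools.product(range(M), repeat=L),
           key=lambda ch: np.sum((x - codeword(ch)) ** 2))

distortion = np.mean((x - codeword(best)) ** 2)
print(f"rate = {L * np.log(M) / n:.3f} nats/sample, distortion = {distortion:.3f}")
```

The exhaustive search is exponential in L and appears here only because the paper's analysis concerns minimum-distance (optimal) encoding; computationally efficient encoders for sparse regression codes are a separate question.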


Full work available at URL: https://arxiv.org/abs/1202.0840

Cited In (1)