Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding
Publication:2986323
DOI: 10.1109/TIT.2014.2314676
zbMATH Open: 1360.94220
arXiv: 1212.1707
OpenAlex: W2763589418
MaRDI QID: Q2986323
Sekhar Tatikonda, Tuhin Sarkar, Ramji Venkataramanan
Publication date: 16 May 2017
Published in: IEEE Transactions on Information Theory
Abstract: We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. The codebook is defined by a design matrix, and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. It is shown to achieve the optimal distortion-rate function for i.i.d. Gaussian sources under the squared-error distortion criterion. For a given rate, the parameters of the design matrix can be varied to trade off distortion performance against encoding complexity. An example of such a trade-off as a function of the block length n is the following: with computational resource (space or time) per source sample of O((n/log n)^2), for a fixed distortion level above the Gaussian distortion-rate function, the probability of excess distortion decays exponentially in n. The Sparse Regression Code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d. Gaussian source with the same variance. Simulations show that the encoder has good empirical performance, especially at low and moderate rates.
Full work available at URL: https://arxiv.org/abs/1212.1707
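The abstract's encoding procedure can be illustrated with a short sketch. The design matrix is split into sections, and the encoder greedily picks one column per section to reduce the residual between the source sequence and the partial codeword. This is a minimal, hedged illustration: the section count L, columns-per-section M, and the least-squares coefficient at each step are illustrative choices for this sketch (the paper's algorithm uses pre-specified coefficients), not the authors' exact construction.

```python
import numpy as np

def sparc_encode(x, A, L, M):
    """Greedy successive approximation of source x using design matrix A.

    A has L sections of M columns each; one column is chosen per section
    by maximizing the inner product with the current residual.
    """
    residual = x.astype(float).copy()
    chosen = []
    for i in range(L):
        section = A[:, i * M:(i + 1) * M]       # columns of section i
        scores = section.T @ residual           # alignment with residual
        j = int(np.argmax(scores))              # best-aligned column
        col = section[:, j]
        beta = scores[j] / (col @ col)          # least-squares step (sketch choice)
        residual = residual - beta * col        # peel off its contribution
        chosen.append(j)
    return chosen, residual

# Tiny usage example with a random Gaussian design matrix
rng = np.random.default_rng(0)
n, L, M = 64, 8, 16
A = rng.standard_normal((n, L * M)) / np.sqrt(n)  # roughly unit-norm columns
x = rng.standard_normal(n)                        # i.i.d. Gaussian source
idx, res = sparc_encode(x, A, L, M)
```

The chosen indices `idx` (one per section) are what would be transmitted; the residual's energy after encoding measures the achieved distortion, which shrinks as the L, M parameters grow at the cost of more computation per source sample.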