Fundamental Performance Limits for Ideal Decoders in High-Dimensional Linear Inverse Problems
Publication:2979183
DOI: 10.1109/TIT.2014.2364403
zbMATH Open: 1359.94838
arXiv: 1311.6239
OpenAlex: W2140089781
MaRDI QID: Q2979183
FDO: Q2979183
Authors: Anthony Bourrier, Tomer Peleg, M. E. Davies, Patrick Pérez, Rémi Gribonval
Publication date: 2 May 2017
Published in: IEEE Transactions on Information Theory
Abstract: This paper focuses on characterizing the fundamental performance limits that can be expected from an ideal decoder given a general model, i.e., a general subset of "simple" vectors of interest. First, we extend the so-called notion of instance optimality of a decoder to settings where one only wishes to reconstruct some part of the original high-dimensional vector from a low-dimensional observation. This covers practical settings such as medical imaging of a region of interest, or audio source separation when one is only interested in estimating the contribution of a specific instrument to a musical recording. We define instance optimality relative to a model well beyond the traditional framework of sparse recovery, and characterize the existence of an instance optimal decoder in terms of joint properties of the model and the considered linear operator. Noiseless and noise-robust settings are both considered. We show, somewhat surprisingly, that the existence of noise-aware instance optimal decoders for all noise levels implies the existence of a noise-blind decoder. A consequence of our results is that, for models rich enough to contain an orthonormal basis, the existence of an L2/L2 instance optimal decoder is only possible when the linear operator is not substantially dimension-reducing. This covers well-known cases (sparse vectors, low-rank matrices) as well as a number of seemingly new situations (structured sparsity and sparse inverse covariance matrices, for instance). We exhibit an operator-dependent norm which, under a model-specific generalization of the Restricted Isometry Property (RIP), always yields a feasible instance optimality property. This norm can be upper bounded by an atomic norm relative to the considered model.
Full work available at URL: https://arxiv.org/abs/1311.6239
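The abstract's model-specific generalization of the Restricted Isometry Property can be illustrated numerically for the classical model of k-sparse vectors: a RIP-type operator approximately preserves the norm of every vector in the model. The sketch below is a Monte Carlo illustration under assumed parameters (dimensions, sparsity level, and Gaussian measurements are all illustrative choices, not taken from the paper):

```python
import numpy as np

# Illustrative check of RIP-like behavior of a random Gaussian measurement
# operator M on the model Sigma_k of k-sparse vectors. If M satisfies a RIP
# on this model, the ratios ||Mx|| / ||x|| concentrate near 1.
rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                          # ambient dim, measurements, sparsity
M = rng.standard_normal((m, n)) / np.sqrt(m)  # column-normalized Gaussian operator

ratios = []
for _ in range(2000):
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)  # random k-sparse support
    x[support] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(M @ x) / np.linalg.norm(x))

ratios = np.array(ratios)
print(f"min ratio: {ratios.min():.3f}, max ratio: {ratios.max():.3f}")
```

Observing all ratios within a narrow band around 1 is consistent with (though does not prove) a small restricted isometry constant on the sampled model; a dimension-reducing operator could not exhibit this behavior on a model rich enough to contain an orthonormal basis, which is the obstruction the paper formalizes.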
Cited In (13)
- Compressed sensing with local structure: uniform recovery guarantees for the sparsity in levels class
- Infinite-dimensional compressed sensing and function interpolation
- Generalized sampling and infinite-dimensional compressed sensing
- The quest for optimal sampling: computationally efficient, structure-exploiting measurements for compressed sensing
- Multilinear compressive sensing and an application to convolutional linear networks
- Stable recovery of low-dimensional cones in Hilbert spaces: one RIP to rule them all
- Compressive statistical learning with random feature moments
- Performance bounds of the intensity-based estimators for noisy phase retrieval
- A theory of optimal convex regularization for low-dimensional recovery
- The basins of attraction of the global minimizers of the non-convex sparse spike estimation problem
- On the Absence of Uniform Recovery in Many Real-World Applications of Compressed Sensing and the Restricted Isometry Property and Nullspace Property in Levels
- Learning with optimal interpolation norms
- Breaking the coherence barrier: a new theory for compressed sensing