A mathematical introduction to compressive sensing (Q351503)

From MaRDI portal
Property / review text

There are two important features of this book. The first is the set of ``Notes'' sections at the end of each chapter. These Notes supply relevant references, historical remarks, additional facts, and open questions related to the respective chapters. They can be used to expand students' knowledge and understanding, or as starting points for a seminar. The second is the collection of exercises spread throughout the chapters, an essential component for students to master the presented material. Moreover, the book concludes with three appendices that cover basic material from matrix analysis and convex analysis, as well as other interesting topics that hint at advanced areas of mathematics. The book contains fifteen chapters. The first chapter gives a brief description of compressive sensing with some motivations and applications, and a detailed overview of the book. Chapter 2 introduces the notions of vector sparsity and compressibility together with some basic algorithms. The minimal number of linear measurements required to recover sparse vectors is investigated in two different settings, and it is proved that \(\ell_0\)-minimization, the ideal recovery scheme, is NP-hard in general. Chapter 3 presents popular algorithms used in compressive sensing, grouped into three classes: optimization methods, greedy methods, and thresholding-based methods. Chapter 4 focuses on the basis pursuit (\(\ell_1\)-minimization) strategy, which consists in solving a convex optimization problem. Necessary and sufficient conditions for exact or approximate reconstruction of the original sparse or compressible vector are discussed, and the recovery of sparse vectors via basis pursuit is interpreted in terms of polytope geometry. In Chapter 5 the notion of the coherence of a matrix and some of its generalizations are introduced. The authors investigate how small the coherence can be and exhibit matrices with small coherence. They then discuss sufficient conditions, expressed in terms of the coherence, that guarantee the success of orthogonal matching pursuit, basis pursuit, and thresholding algorithms. In Chapter 6 the restricted isometry property, also known as a uniform uncertainty principle, is introduced, and the success of sparse recovery is established under suitable conditions on the restricted isometry constants for basis pursuit, thresholding-based algorithms, and greedy algorithms. Chapter 7 covers basic facts of probability theory: the relation of the moments of random variables to their tails, and deviation inequalities for sums of independent random variables obtained by means of moment-generating functions. Chapter 8 estimates the expectation of the \(\ell_p\)-norm of a standard Gaussian vector for \(p = 1, 2, \infty\). Some simple results for Rademacher sums, as well as the symmetrization method, are presented. Decoupling inequalities are introduced in order to replace one sequence of random variables in a double sum by an independent copy. The scalar Bernstein inequality for bounded random variables is extended to a powerful deviation inequality for the operator norm of sums of random matrices. Finally, a deviation inequality for the supremum of an empirical process, sometimes called Talagrand's concentration inequality or Bousquet's inequality, is discussed. Sparse recovery based on random matrices and some related topics are discussed in Chapters 9--12.
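For the reader's orientation, the central notions just mentioned may be recalled in a standard form (the book's own notation may differ in details). Given a measurement matrix \(A \in \mathbb{C}^{m \times N}\) with \(m \ll N\) and measurements \(y = Ax\), the ideal recovery scheme of Chapter 2 is the combinatorial program \[ \min_{z \in \mathbb{C}^N} \|z\|_0 \quad \text{subject to } Az = y, \] while the basis pursuit strategy of Chapter 4 relaxes it to the convex program \[ \min_{z \in \mathbb{C}^N} \|z\|_1 \quad \text{subject to } Az = y. \]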
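The coherence studied in Chapter 5, for a matrix \(A\) with \(\ell_2\)-normalized columns \(a_1, \dots, a_N\), is \[ \mu(A) = \max_{1 \le i \ne j \le N} |\langle a_i, a_j \rangle|, \] and sufficiently small coherence guarantees the success of the recovery algorithms listed above.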
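Similarly, the restricted isometry constant \(\delta_s\) of Chapter 6 is the smallest \(\delta \ge 0\) such that \[ (1 - \delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta)\|x\|_2^2 \quad \text{for all } s\text{-sparse } x, \] and sufficiently small restricted isometry constants again guarantee sparse recovery by the algorithms mentioned.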
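As a sample of the Gaussian estimates of Chapter 8, stated here in a standard form for illustration: for a standard Gaussian vector \(g \in \mathbb{R}^N\) one has \[ \mathbb{E}\|g\|_1 = N\sqrt{2/\pi}, \qquad \mathbb{E}\|g\|_2 \le \sqrt{N}, \qquad \mathbb{E}\|g\|_\infty \le \sqrt{2\ln(2N)}. \]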
In Chapter 13 another type of matrix, namely adjacency matrices of certain bipartite graphs, is introduced. The underlying graphs, called lossless expanders, yield measurement matrices that can be used to reconstruct sparse vectors from a limited number of measurements. It is proved that, with such adjacency matrices as measurement matrices, stable and robust reconstruction of sparse vectors via \(\ell_1\)-minimization can be achieved. The stability and robustness of a thresholding-based algorithm and of a simple sublinear-time algorithm are also analyzed. Chapter 14 deals with the recovery of random sparse signals using deterministic matrices. The results of this chapter are weaker than those of Chapter 5 in the sense that they apply only to most signals instead of all signals, but they show that the deterministic bounds based on coherence may be somewhat pessimistic even if no bounds on the restricted isometry constants are available. Bounds on the conditioning of a random column submatrix of a given matrix are also derived. \(\ell_1\)-minimization plays a key role as a recovery method for compressive sensing, and algorithms designed specifically for it may be faster than general-purpose methods; in Chapter 15 several of these algorithms are introduced and analyzed. The comprehensive picture of compressive sensing that the book presents is its most important feature, and the material constitutes a solid foundation for the mathematical theory of compressive sensing.
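To illustrate the kind of specialized scheme Chapter 15 is concerned with (the example below is a standard one, given only for illustration and not necessarily among the chapter's own selection), the iterative shrinkage-thresholding algorithm for the regularized problem \[ \min_{z} \tfrac{1}{2}\|Az - y\|_2^2 + \lambda \|z\|_1 \] iterates \[ z^{n+1} = S_{\lambda\tau}\bigl(z^n - \tau A^*(Az^n - y)\bigr), \qquad S_\theta(u)_j = \operatorname{sgn}(u_j)\max(|u_j| - \theta, 0), \] with step size \(0 < \tau \le \|A\|_{2\to 2}^{-2}\); each iteration costs only matrix-vector multiplications with \(A\) and \(A^*\), which is what makes such methods fast.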
Property / reviewed by: Devendra Kumar
Property / Mathematics Subject Classification ID: 94-01
Property / Mathematics Subject Classification ID: 94A08
Property / Mathematics Subject Classification ID: 94A12
Property / zbMATH DE Number: 6184802
Property / zbMATH Keywords: compressive sensing
Property / zbMATH Keywords: sparse recovery
Property / zbMATH Keywords: random variables
Property / zbMATH Keywords: lossless expanders
Property / zbMATH Keywords: robustness
Property / zbMATH Keywords: Gaussian vector
Property / describes a project that uses: CoSaMP
Property / MaRDI profile type: MaRDI publication profile
Property / full work available at URL: https://doi.org/10.1007/978-0-8176-4948-7
Property / OpenAlex ID: W143004564
scientific article

Language: English
Label: A mathematical introduction to compressive sensing
Description: scientific article

    Statements

    A mathematical introduction to compressive sensing (English)
    5 July 2013

    Identifiers