Structured sparsity: discrete and convex approaches
Publication: 3460840
DOI: 10.1007/978-3-319-16042-9_12
zbMATH Open: 1333.94021
arXiv: 1507.05367
OpenAlex: W1037812049
MaRDI QID: Q3460840
FDO: Q3460840
Authors: Anastasios Kyrillidis, Luca Baldassarre, Marwa El Halabi, Quoc Tran Dinh, Volkan Cevher
Publication date: 8 January 2016
Published in: Compressed Sensing and its Applications
Abstract: Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information typically resides in a much lower-dimensional space. However, many solutions proposed today do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated structured sparsity models, which describe the interdependency between the nonzero components of a signal, allowing one to increase the interpretability of the results and to obtain better recovery performance. To better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the model in its discrete form, discuss how to solve the ensuing discrete problems, and then describe convex relaxations. We also consider more general structures defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications.
Full work available at URL: https://arxiv.org/abs/1507.05367
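The abstract contrasts discrete structured sparsity models with their convex relaxations. As a minimal illustration of that contrast (not taken from the paper; the group layout, signal values, and regularization weight below are illustrative assumptions), the following Python sketch compares exact projection onto a k-group-sparse model with the proximal operator of the group-lasso norm, a standard convex proxy, for non-overlapping groups.

```python
# Minimal sketch contrasting discrete and convex treatments of
# non-overlapping group sparsity. All numbers are illustrative assumptions.
import numpy as np

def project_group_sparse(x, groups, k):
    """Discrete approach: keep the k groups with largest Euclidean energy and
    zero out the rest (exact projection when groups do not overlap)."""
    energies = np.array([np.linalg.norm(x[g]) for g in groups])
    keep = np.argsort(energies)[-k:]          # indices of the k strongest groups
    out = np.zeros_like(x)
    for i in keep:
        out[groups[i]] = x[groups[i]]
    return out

def prox_group_lasso(x, groups, lam):
    """Convex approach: proximal operator of the group-lasso norm
    (block soft-thresholding), a standard convex proxy for group sparsity."""
    out = np.zeros_like(x)
    for g in groups:
        norm_g = np.linalg.norm(x[g])
        if norm_g > lam:
            out[g] = (1.0 - lam / norm_g) * x[g]
    return out

# Example: 9 coefficients split into 3 non-overlapping groups.
x = np.array([3.0, 2.5, 2.0, 0.1, -0.2, 0.05, -1.5, 1.0, 0.5])
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]
print(project_group_sparse(x, groups, k=1))   # only the strongest group survives
print(prox_group_lasso(x, groups, lam=1.0))   # weak group is driven to zero
```

On this toy signal the discrete projection keeps the single strongest group exactly and discards the others, while block soft-thresholding shrinks the strong groups and drives the weak one to zero, which is the qualitative difference between the discrete models and their convex relaxations that the chapter analyzes.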
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Least angle regression. (With discussion)
- Numerical Optimization
- Title not available
- Title not available
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Learning deep architectures for AI
- Atomic Decomposition by Basis Pursuit
- Sparsity and Smoothness Via the Fused Lasso
- Model Selection and Estimation in Regression with Grouped Variables
- Smooth minimization of non-smooth functions
- On the distribution of the largest eigenvalue in principal components analysis
- Learning Bayesian networks: The combination of knowledge and statistical data
- On the \(O(1/n)\) convergence rate of the Douglas-Rachford alternating direction method
- Title not available
- The Group Lasso for Logistic Regression
- Title not available
- Matching pursuits with time-frequency dictionaries
- Proximal methods for hierarchical sparse coding
- Compressive sampling
- The composite absolute penalties family for grouped and hierarchical variable selection
- Primal-dual subgradient methods for convex problems
- Structure estimation for discrete graphical models: generalized covariance matrices and their inverses
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Sparse group Lasso and high dimensional multinomial classification
- Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit
- Compressed sensing
- The benefit of group sparsity
- Applications of a Splitting Algorithm to Decomposition in Convex Programming and Variational Inequalities
- Sparse Reconstruction by Separable Approximation
- Signal Recovery by Proximal Forward-Backward Splitting
- Iterative hard thresholding for compressed sensing
- CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Accelerated and inexact forward-backward algorithms
- Strongly Regular Generalized Equations
- A generalized proximal point algorithm for certain non-convex minimization problems
- Title not available
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Title not available
- Local analysis of Newton-type methods for variational inequalities and nonlinear programming
- Beyond the flow decomposition barrier
- Title not available
- Sparse Approximate Solutions to Linear Systems
- Principles of Optics
- Spiking Neuron Models
- A faster strongly polynomial time algorithm for submodular function minimization
- Realization of set functions as cut functions of graphs and hypergraphs
- The convex geometry of linear inverse problems
- An analysis of approximations for maximizing submodular set functions—I
- Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage
- On the Reconstruction of Block-Sparse Signals With an Optimal Number of Measurements
- Multiscale Mining of fMRI Data with Hierarchical Structured Sparsity
- Structured variable selection with sparsity-inducing norms
- Convex analysis and nonlinear optimization. Theory and examples.
- Covariance selection for nonchordal graphs via chordal embedding
- Matrix recipes for hard thresholding methods
- Learning with Structured Sparsity
- Projected Newton Methods for Optimization Problems with Simple Constraints
- Model-Based Compressive Sensing
- Hard thresholding pursuit: an algorithm for compressive sensing
- Near best tree approximation
- Robust Recovery of Signals From a Structured Union of Subspaces
- Embedded image coding using zerotrees of wavelet coefficients
- Exploiting Structure in Wavelet-Based Bayesian Compressive Sensing
- Sampling Theorems for Signals From the Union of Finite-Dimensional Linear Subspaces
- Composite self-concordant minimization
- On model-based RIP-1 matrices
- Learning with submodular functions: a convex optimization perspective
- Group-Sparse Model Selection: Hardness and Relaxations
- Submodular maximization with cardinality constraints
- A submodular function minimization algorithm based on the minimum-norm base
- Model-based sketching and recovery with expanders
- Order-Preserving Factor Analysis—Application to Longitudinal Gene Expression
Cited In (13)
- Structured sparsity promoting functions
- Title not available
- Fast algorithms for structured sparsity (ICALP 2015 invited tutorial)
- Weakly decomposable regularization penalties and structured sparsity
- Learning the Structure for Structured Sparsity
- Learning with Structured Sparsity
- Regularizers for structured sparsity
- Disjointness through the lens of Vapnik-Chervonenkis dimension: sparsity and beyond
- An overview of computational sparse models and their applications in artificial intelligence
- Theoretical guarantees for graph sparse coding
- Sparsity constrained estimation in image processing and computer vision
- A single-phase, proximal path-following framework
- Structured sparsity through convex optimization