Sensitivity analysis for mirror-stratifiable convex functions
From MaRDI portal
Publication:4554065
Abstract: This paper provides a set of sensitivity analysis and activity identification results for a class of convex functions with a strong geometric structure, which we coin "mirror-stratifiable". For these functions there is a bijection between a primal and a dual stratification of the space into partitioning sets, called strata. This pairing is crucial for tracking the strata that are identifiable by solutions of parametrized optimization problems or by iterates of optimization algorithms. The class encompasses all regularizers routinely used in signal and image processing, machine learning, and statistics. We show that this mirror-stratifiable structure enjoys a rich sensitivity theory, allowing us to study the stability of solutions of optimization problems under small perturbations, as well as activity identification by first-order proximal splitting-type algorithms. Existing results in the literature typically assume that, under a non-degeneracy condition, the active set associated with a minimizer is stable under small perturbations and is identified in finite time by optimization schemes. In contrast, our results require no non-degeneracy assumption: consequently, the optimal active set is no longer necessarily stable, but we are still able to track precisely the set of identifiable strata. We show that these results have crucial implications when solving challenging ill-posed inverse problems via regularization, a typical scenario in which the non-degeneracy condition is not fulfilled. Our theoretical results, illustrated by numerical simulations, allow us to characterize the instability behaviour of the regularized solutions by locating the set of all low-dimensional strata that can potentially be identified by these solutions.
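The activity identification phenomenon described in the abstract can be illustrated on a concrete instance. The sketch below (not code from the paper; problem sizes, the regularization weight `lam`, and the iteration budget are illustrative assumptions) runs the forward-backward (ISTA) iteration on a small lasso problem, whose \(\ell_1\) regularizer is a standard example of a mirror-stratifiable function: here the strata correspond to support patterns of the iterates, and the support stabilizes after finitely many iterations.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Forward-backward iterations for 0.5*||Ax - b||^2 + lam*||x||_1.
    Records the support of each iterate to show that the active set
    (a stratum of the l1 regularizer) is identified in finite time."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    supports = []
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
        supports.append(tuple(np.flatnonzero(x)))
    return x, supports

# Illustrative noiseless sparse recovery instance (assumed, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 4]] = [1.5, -2.0]
b = A @ x_true

x, supports = ista(A, b, lam=0.5)
print("final support:", supports[-1])
print("support stable over last 100 iterations:", len(set(supports[-100:])) == 1)
```

On this well-conditioned instance the iterates' support freezes long before the iterates themselves converge, which is exactly the finite-time stratum identification the paper studies without any non-degeneracy assumption.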
Cites work
- scientific article; zbMATH DE number 5957408
- scientific article; zbMATH DE number 176973
- scientific article; zbMATH DE number 1502618
- scientific article; zbMATH DE number 2155014
- scientific article; zbMATH DE number 845714
- scientific article; zbMATH DE number 3307153
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A wavelet tour of signal processing. The sparse way.
- Active Sets, Nonsmoothness, and Sensitivity
- Activity identification and local linear convergence of forward-backward-type methods
- Atomic Decomposition by Basis Pursuit
- Convexifying the counting function on \(\mathbb R^{p}\) for convexifying the rank function on \(M_{m,n}(\mathbb R)\)
- Consistency of trace norm minimization
- Convex Analysis
- Convex analysis and monotone operator theory in Hilbert spaces
- Convexifying the set of matrices of bounded rank: applications to the quasiconvexification and convexification of the rank function
- Decoding by Linear Programming
- Eventual linear convergence of the Douglas-Rachford iteration for basis pursuit
- Exact matrix completion via convex optimization
- Exact support recovery for sparse spikes deconvolution
- Fixed-Point Continuation for $\ell_1$-Minimization: Methodology and Convergence
- Generic minimizing behavior in semialgebraic optimization
- Gradient-based algorithms with applications to signal-recovery problems
- Identifying active manifolds in regularization problems
- Linear convergence of iterative soft-thresholding
- Local Linear Convergence of ISTA and FISTA on the LASSO Problem
- Local convergence properties of Douglas-Rachford and alternating direction method of multipliers
- Local linear convergence of the alternating direction method of multipliers on quadratic or linear programs
- Low complexity regularization of linear inverse problems
- Model Consistency of Partly Smooth Regularizers
- Model Selection and Estimation in Regression with Grouped Variables
- Model selection with low complexity priors
- Necessary and sufficient conditions for linear convergence of \(\ell^1\)-regularization
- Newton methods for nonsmooth convex minimization: connections among \(\mathcal U\)-Lagrangian, Riemannian Newton and SQP methods
- On Sparse Representations in Arbitrary Redundant Bases
- On the convergence of the iterates of the ``fast iterative shrinkage/thresholding algorithm''
- Optimality, identifiability, and sensitivity
- Orthogonal invariance and identifiability
- Partial Smoothness, Tilt Stability, and Generalized Hessians
- Perturbations, approximations and sensitivity analysis of optimal control systems
- Proximal splitting methods in signal processing
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- Solving monotone inclusions via compositions of nonexpansive averaged operators
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- Statistics for high-dimensional data. Methods, theory and applications.
- The Power of Convex Relaxation: Near-Optimal Matrix Completion
- The rate of linear convergence of the Douglas-Rachford algorithm for subspaces is the cosine of the Friedrichs angle
- The \(\mathcal U\)-Lagrangian of a convex function
- Towards a Mathematical Theory of Super-resolution
- Variational Analysis
- Variational methods in imaging
Cited in (10)
- Nonsmoothness in machine learning: specific structure, proximal identification, and applications
- Differentiating Nonsmooth Solutions to Parametric Monotone Inclusion Problems
- On the interplay between acceleration and identification for the proximal gradient algorithm
- Local linear convergence of proximal coordinate descent algorithm
- Proximal gradient methods with adaptive subspace sampling
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
- Exact recovery of the support of piecewise constant images via total variation regularization
- On the strong convergence of forward-backward splitting in reconstructing jointly sparse signals
- Active-set Newton methods and partial smoothness
- Thresholding gradient methods in Hilbert spaces: support identification and linear convergence