Learning with optimal interpolation norms

Abstract: We analyze a class of norms defined via an optimal interpolation problem involving the composition of norms and a linear operator. This construction, known as infimal postcomposition in convex analysis, is shown to encompass various norms that have been used as regularizers in machine learning, signal processing, and statistics. In particular, these include the latent group lasso, the overlapping group lasso, and certain norms used for learning tensors. We establish basic properties of this class of norms and provide their dual norms. The extension to more general classes of convex functions is also discussed. A stochastic block-coordinate version of the Douglas-Rachford algorithm is devised to solve minimization problems involving these regularizers. A prominent feature of the algorithm is that it yields iterates that converge to a solution even in the case of nonsmooth losses and random block updates. Finally, we present numerical experiments on problems employing the latent group lasso penalty.
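To make the construction concrete, the display below gives a minimal sketch of an optimal interpolation norm and of the latent group lasso as a special case; the symbols \(\Lambda\), \(B\), and \(\mathcal{G}\) are our own notation, inferred from the abstract rather than quoted from the paper.

% Sketch (our notation): infimal postcomposition of a norm by a linear operator.
% Assumptions: \Lambda is a norm on \mathcal{V}, B : \mathcal{V} \to \mathcal{X}
% is linear and surjective, and the infimum below is positive for every x \neq 0,
% so that \|\cdot\| is indeed a norm.
\[
  \|x\| \;=\; \inf\bigl\{\, \Lambda(v) \;:\; v \in \mathcal{V},\; Bv = x \,\bigr\}.
\]
% Special case: a family \mathcal{G} of groups covering the coordinates of x,
% with \Lambda(v) = \sum_{g \in \mathcal{G}} \|v_g\|_2 and Bv = \sum_{g \in \mathcal{G}} v_g,
% which yields the latent group lasso penalty:
\[
  \|x\|_{\mathrm{LGL}}
  \;=\;
  \inf\Bigl\{\, \textstyle\sum_{g \in \mathcal{G}} \|v_g\|_2
  \;:\; \sum_{g \in \mathcal{G}} v_g = x,\;
  \operatorname{supp}(v_g) \subseteq g \,\Bigr\}.
\]

Under these assumptions, a standard duality computation (consistent with, but not quoted from, the paper) gives the dual norm as \(\|u\|_{*} = \Lambda_{*}(B^{*}u)\), where \(\Lambda_{*}\) denotes the dual norm of \(\Lambda\); for the latent group lasso this reduces to \(\max_{g \in \mathcal{G}} \|u_g\|_2\).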
Cites work
- scientific article; zbMATH DE number 1807400
- scientific article; zbMATH DE number 5500983
- scientific article; zbMATH DE number 3602126
- A new approach in interpolation spaces
- An algorithm for splitting parallel sums of linearly composed monotone operators, with applications to signal recovery
- Background Subtraction Based on Low-Rank and Structured Sparse Decomposition
- Convex Analysis
- Convex analysis and monotone operator theory in Hilbert spaces
- Convex multi-task feature learning
- Exploring Structured Sparsity by a Reweighted Laplace Prior for Hyperspectral Compressive Sensing
- Feature space perspectives for learning the kernel
- Fundamental Performance Limits for Ideal Decoders in High-Dimensional Linear Inverse Problems
- Learning the kernel function via regularization
- Learning with tensors: a framework based on convex optimization and spectral regularization
- Model Selection and Estimation in Regression with Grouped Variables
- New perspectives on \(k\)-support and cluster norms
- On spectral learning
- Optimization with sparsity-inducing penalties
- Prolongements de foncteurs d'interpolation et applications
- Proximal methods for hierarchical sparse coding
- Proximity algorithms for image models: denoising
- Regularizers for structured sparsity
- Robust 2D Principal Component Analysis: A Structured Sparsity Regularized Approach
- Sparsity and Smoothness Via the Fused Lasso
- Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping
- Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping. II: Mean-square and linear convergence
- Structured sparsity and generalization
- Super-Resolution With Sparse Mixing Estimators
- Systems of Structured Monotone Inclusions: Duality, Algorithms, and Applications
- Tensor Decompositions and Applications
- Tensor completion and low-\(n\)-rank tensor recovery via convex optimization
- The composite absolute penalties family for grouped and hierarchical variable selection
- The convex geometry of linear inverse problems
- Using Block Norms for Location Modeling