Learning with optimal interpolation norms

Publication: Q2420165 (MaRDI)

DOI: 10.1007/S11075-018-0568-1
zbMATH Open: 1454.90049
arXiv: 1603.09273
OpenAlex: W2753852770
Wikidata: Q129562610
Scholia: Q129562610


Authors: Patrick L. Combettes, Andrew M. McDonald, Charles A. Micchelli, Massimiliano Pontil


Publication date: 5 June 2019

Published in: Numerical Algorithms

Abstract: We analyze a class of norms defined via an optimal interpolation problem involving the composition of norms and a linear operator. This construction, known as infimal postcomposition in convex analysis, is shown to encompass various norms that have been used as regularizers in machine learning, signal processing, and statistics. In particular, these include the latent group lasso, the overlapping group lasso, and certain norms used for learning tensors. We establish basic properties of this class of norms and provide their dual norms. The extension to more general classes of convex functions is also discussed. A stochastic block-coordinate version of the Douglas-Rachford algorithm is devised to solve minimization problems involving these regularizers. A prominent feature of the algorithm is that it yields iterates that converge to a solution even in the case of nonsmooth losses and random block updates. Finally, we present numerical experiments on problems employing the latent group lasso penalty.
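
To make the construction concrete, here is a minimal instance of an infimal postcomposition norm; the notation is illustrative and may differ from the paper's. Given a norm \|\cdot\| on \mathbb{R}^m and a surjective linear operator B\colon \mathbb{R}^m \to \mathbb{R}^n, define

\[
  |||x||| \;=\; \inf\bigl\{\, \|v\| \;:\; v \in \mathbb{R}^m,\; Bv = x \,\bigr\},
\]

which is a norm on \mathbb{R}^n whose dual is the composition of the dual norm with the adjoint,

\[
  |||u|||_{*} \;=\; \|B^{*}u\|_{*}.
\]

For example, with groups G_1,\dots,G_k \subseteq \{1,\dots,n\}, variables v = (v_1,\dots,v_k) with \operatorname{supp}(v_g) \subseteq G_g, the choice \|v\| = \sum_{g} \|v_g\|_2 and Bv = \sum_{g} v_g recovers the latent group lasso penalty

\[
  \Omega(x) \;=\; \inf\Bigl\{\, \sum_{g=1}^{k} \|v_g\|_2 \;:\; \sum_{g=1}^{k} v_g = x,\;
  \operatorname{supp}(v_g) \subseteq G_g \,\Bigr\}.
\]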


Full work available at URL: https://arxiv.org/abs/1603.09273
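
The paper's solver is a stochastic block-coordinate variant of the Douglas-Rachford algorithm. The sketch below is only a minimal deterministic Douglas-Rachford iteration, shown under the simplifying assumption of non-overlapping groups, for which the latent group lasso reduces to the ordinary group lasso and its proximity operator is group soft-thresholding. All function names and the toy problem are illustrative, not taken from the paper.

import numpy as np


def prox_group_l1(v, groups, tau):
    """Group soft-thresholding: prox of tau * sum_g ||v[g]||_2.

    Valid for non-overlapping groups, where the latent group lasso
    coincides with the ordinary group lasso."""
    out = v.copy()
    for g in groups:
        norm_g = np.linalg.norm(v[g])
        scale = max(0.0, 1.0 - tau / norm_g) if norm_g > 0 else 0.0
        out[g] = scale * v[g]
    return out


def prox_least_squares(v, A, b, gamma):
    """Prox of gamma * (1/2)||Ax - b||^2:
    solves (I + gamma A^T A) x = v + gamma A^T b."""
    n = A.shape[1]
    return np.linalg.solve(np.eye(n) + gamma * A.T @ A, v + gamma * (A.T @ b))


def douglas_rachford(A, b, groups, lam, gamma=1.0, n_iter=500):
    """Minimize (1/2)||Ax - b||^2 + lam * sum_g ||x[g]||_2
    by a plain (deterministic) Douglas-Rachford iteration."""
    y = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox_group_l1(y, groups, gamma * lam)          # prox of the regularizer
        z = prox_least_squares(2.0 * x - y, A, b, gamma)   # prox of the loss at the reflection
        y += z - x                                         # relaxation parameter fixed at 1
    return prox_group_l1(y, groups, gamma * lam)


# Toy problem: 6 variables in 3 disjoint groups, only the first group active.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 6))
x_true = np.concatenate([rng.standard_normal(2), np.zeros(4)])
b = A @ x_true + 0.01 * rng.standard_normal(20)
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
print(np.round(douglas_rachford(A, b, groups, lam=0.5), 3))

The paper's contribution beyond this sketch is to update only randomly selected blocks of coordinates at each iteration while retaining convergence of the iterates, even for nonsmooth losses.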





This page was built for publication: Learning with optimal interpolation norms
