A unifying representer theorem for inverse problems and machine learning

From MaRDI portal

Publication:2231644

DOI: 10.1007/S10208-020-09472-X
zbMath: 1479.46088
arXiv: 1903.00687
OpenAlex: W3088481514
Wikidata: Q130240264
Scholia: Q130240264
MaRDI QID: Q2231644

Michael Unser

Publication date: 30 September 2021

Published in: Foundations of Computational Mathematics

Full work available at URL: https://arxiv.org/abs/1903.00687

Related Items (23)

A superposition principle for the inhomogeneous continuity equation with Hellinger–Kantorovich-regular coefficients
Neural network approximation
Two-Layer Neural Networks with Values in a Banach Space
Parameter choices for sparse regularization with the ℓ1 norm *
What Kinds of Functions Do Deep Neural Networks Learn? Insights from Variational Spline Theory
Explicit representations for Banach subspaces of Lizorkin distributions
A generalized conditional gradient method for dynamic inverse problems with optimal transport regularization
Sparse machine learning in Banach spaces
Linear inverse problems with Hessian-Schatten total variation
Optimal learning
A duality approach to regularized learning problems in Banach spaces
The Geometry of Sparse Analysis Regularization
Regularization method for the generalized moment problem in a functional reproducing kernel Hilbert space
Functions with bounded Hessian-Schatten variation: density, variational, and extremality properties
On the determination of Lagrange multipliers for a weighted Lasso problem using geometric and convex analysis techniques
Weighted variation spaces and approximation by shallow ReLU networks
Unnamed Item
Distributional extension and invertibility of the \(k\)-plane transform and its dual
Splitting method for support vector machine in reproducing kernel Banach space with a lower semi-continuous loss function
From kernel methods to neural networks: a unifying variational formulation
Functional estimation of anisotropic covariance and autocovariance operators on the sphere
Minimum norm interpolation in the ℓ1(ℕ) space
On the extremal points of the ball of the Benamou–Brenier energy

Cites Work




This page was built for publication: A unifying representer theorem for inverse problems and machine learning