Continuum Limit of Lipschitz Learning on Graphs

From MaRDI portal
Publication: Q6355474

DOI: 10.1007/s10208-022-09557-9
arXiv: 2012.03772
MaRDI QID: Q6355474


Authors: Tim Roith, Leon Bungert


Publication date: 7 December 2020

Abstract: Tackling semi-supervised learning problems with graph-based methods has become a trend in recent years, since graphs can represent all kinds of data and provide a suitable framework for studying continuum limits, e.g., of differential operators. A popular strategy here is p-Laplacian learning, which imposes a smoothness condition on the sought inference function on the set of unlabeled data. For p < ∞, continuum limits of this approach were studied using tools from Γ-convergence. For the case p = ∞, which is referred to as Lipschitz learning, continuum limits of the related ∞-Laplacian equation were studied using the concept of viscosity solutions. In this work, we prove continuum limits of Lipschitz learning using Γ-convergence. In particular, we define a sequence of functionals which approximate the largest local Lipschitz constant of a graph function and prove Γ-convergence in the L^∞-topology to the supremum norm of the gradient as the graph becomes denser. Furthermore, we show compactness of the functionals, which implies convergence of minimizers. In our analysis we allow a varying set of labeled data which converges to a general closed set in the Hausdorff distance. We apply our results to nonlinear ground states, i.e., minimizers with constrained L^p-norm, and, as a by-product, prove convergence of graph distance functions to geodesic distance functions.
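The central object in the abstract, the largest local Lipschitz constant of a graph function, can be illustrated with a minimal sketch. The ε-ball graph construction, the variable names, and the unscaled difference quotient below are assumptions for illustration; the paper's precise functionals and scaling may differ.

```python
def local_lipschitz_constant(points, u, eps):
    # Sketch: largest difference quotient |u_i - u_j| / |x_i - x_j|
    # over edges of an assumed eps-ball graph on 1-D points.
    # (Illustrative only; not the paper's exact definition.)
    best = 0.0
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(points[i] - points[j])
            if 0 < d <= eps:  # vertices within distance eps are adjacent
                best = max(best, abs(u[i] - u[j]) / d)
    return best

# For the linear function u(x) = 2x on a grid in [0, 1], the discrete
# constant matches the continuum Lipschitz constant |u'| = 2.
pts = [k / 49 for k in range(50)]
u = [2.0 * x for x in pts]
print(local_lipschitz_constant(pts, u, eps=0.1))  # → 2.0 (up to floating point)
```

As the grid becomes denser and ε shrinks, this discrete quantity approaches the supremum norm of the gradient, which is the convergence the paper establishes in the Γ-sense.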













