Large data and zero noise limits of graph-based semi-supervised learning algorithms

Publication:778036

DOI: 10.1016/J.ACHA.2019.03.005
zbMATH Open: 1442.62768
arXiv: 1805.09450
OpenAlex: W2963110350
Wikidata: Q128098408
Scholia: Q128098408
MaRDI QID: Q778036
FDO: Q778036


Authors: M. M. Dunlop, Dejan Slepčev, Matthew Thorpe, A. M. Stuart


Publication date: 30 June 2020

Published in: Applied and Computational Harmonic Analysis

Abstract: Scalings in which the graph Laplacian approaches a differential operator in the large graph limit are used to develop understanding of a number of algorithms for semi-supervised learning; in particular, the extensions to this graph setting of the probit algorithm and of level set and kriging methods are studied. Both optimization and Bayesian approaches are considered, based around a regularizing quadratic form built from an affine transformation of the Laplacian raised to a possibly fractional exponent. Conditions on the parameters defining this quadratic form are identified under which well-defined limiting continuum analogues of the optimization and Bayesian semi-supervised learning problems may be found, thereby shedding light on the design of algorithms in the large graph setting. The large graph limits of the optimization formulations are tackled through Gamma-convergence, using the recently introduced TL^p metric. The small labelling noise limits of the Bayesian formulations are also identified and contrasted with pre-existing harmonic function approaches to the problem.
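
The abstract centers on a regularizing quadratic form built from (L + tau^2 I)^alpha, an affine shift of the graph Laplacian raised to a possibly fractional exponent, used in both the optimization and the Bayesian (kriging) formulations. The following is a minimal, self-contained sketch of the finite-graph kriging estimator consistent with that description; the toy data, the Gaussian weight kernel, and the values of eps, tau, alpha, and gamma are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n points drawn from two Gaussian clusters in R^2.
n = 200
X = np.vstack([rng.normal(-1.0, 0.3, (n // 2, 2)),
               rng.normal(+1.0, 0.3, (n // 2, 2))])

# Gaussian-weighted proximity graph and unnormalized Laplacian L = D - W.
eps = 0.5  # illustrative graph bandwidth
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / (2.0 * eps ** 2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

# Regularizing quadratic form: an affine shift of the Laplacian raised
# to a possibly fractional exponent, as in the abstract.  tau and alpha
# are illustrative values.  Computed via eigendecomposition so the
# fractional power stays symmetric and real.
tau, alpha = 1.0, 1.5
vals, vecs = np.linalg.eigh(L + tau ** 2 * np.eye(n))
P = (vecs * vals ** alpha) @ vecs.T  # prior precision C^{-1}

# Observe a few labels (+1 / -1) with Gaussian noise of size gamma.
labeled = np.array([0, 1, n - 2, n - 1])
y = np.array([-1.0, -1.0, +1.0, +1.0])
gamma = 0.1
H = np.zeros((len(labeled), n))
H[np.arange(len(labeled)), labeled] = 1.0  # observation operator

# Kriging / posterior-mean estimate: the minimizer of
#   <u, P u> + |H u - y|^2 / gamma^2
# solves (P + H^T H / gamma^2) u = H^T y / gamma^2.
u = np.linalg.solve(P + H.T @ H / gamma ** 2, H.T @ y / gamma ** 2)
print("sign agreement on cluster 1:", np.mean(np.sign(u[: n // 2]) == -1))
print("sign agreement on cluster 2:", np.mean(np.sign(u[n // 2:]) == +1))
```

As gamma tends to zero the estimate is forced to interpolate the observed labels, which corresponds to the small labelling noise limit that the abstract contrasts with harmonic function approaches.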


Full work available at URL: https://arxiv.org/abs/1805.09450










Cited in 26 publications.






