Properly-weighted graph Laplacian for semi-supervised learning

From MaRDI portal
Publication:2019913




Abstract: The performance of traditional graph Laplacian methods for semi-supervised learning degrades substantially as the ratio of labeled to unlabeled data decreases, due to a degeneracy in the graph Laplacian. Several approaches have been proposed recently to address this; however, we show that some of them remain ill-posed in the large-data limit. In this paper, we show a way to correctly set the weights in Laplacian regularization so that the estimator remains well-posed and stable in the large-sample limit. We prove that our semi-supervised learning algorithm converges, in the infinite-sample-size limit, to the smooth solution of a continuum variational problem that attains the labeled values continuously. Our method is fast and easy to implement.
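To illustrate the setting the abstract describes, the following is a minimal sketch of *standard* Laplacian regularization (harmonic label propagation) on a toy graph, not the paper's properly-weighted scheme: labeled values are held fixed and the unlabeled values solve the graph Laplace equation. The point set, Gaussian edge weights, and bandwidth `eps` are all illustrative assumptions.

```python
import numpy as np

# Toy example: n points on [0, 1], labels given only at the two endpoints.
n = 11
x = np.linspace(0.0, 1.0, n)

# Gaussian edge weights (bandwidth eps is an illustrative choice).
eps = 0.2
W = np.exp(-((x[:, None] - x[None, :]) ** 2) / eps**2)
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = D - W.
L = np.diag(W.sum(axis=1)) - W

labeled = np.array([0, n - 1])
unlabeled = np.setdiff1d(np.arange(n), labeled)
g = np.array([0.0, 1.0])  # labeled values at the endpoints

# Harmonic extension: solve L[u,u] f_u = -L[u,l] g for the unlabeled nodes.
A = L[np.ix_(unlabeled, unlabeled)]
b = -L[np.ix_(unlabeled, labeled)] @ g
f = np.zeros(n)
f[labeled] = g
f[unlabeled] = np.linalg.solve(A, b)
```

With many unlabeled points and few labels, this plain estimator is the one whose large-sample degeneracy motivates the paper; the proposed fix reweights the Laplacian so the continuum limit attains the labeled values continuously.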



Cited in (24)

