Properly-weighted graph Laplacian for semi-supervised learning
Publication:2019913
Abstract: The performance of traditional graph Laplacian methods for semi-supervised learning degrades substantially as the ratio of labeled to unlabeled data decreases, due to a degeneracy in the graph Laplacian. Several approaches have recently been proposed to address this; however, we show that some of them remain ill-posed in the large-data limit. In this paper, we show how to correctly set the weights in Laplacian regularization so that the estimator remains well posed and stable in the large-sample limit. We prove that our semi-supervised learning algorithm converges, in the infinite sample size limit, to the smooth solution of a continuum variational problem that attains the labeled values continuously. Our method is fast and easy to implement.
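The abstract describes reweighting the graph Laplacian near the labeled points so that the hard-constrained minimizer stays well posed when labels are scarce. The sketch below is a minimal illustration of that idea, not the authors' exact construction: it builds a k-nearest-neighbour graph, multiplies the edge weights by a singular factor gamma(x) = 1 + (r0 / dist(x, nearest labeled point))**alpha (an assumed form; the choices of alpha, r0, the Gaussian kernel, and the function name properly_weighted_laplacian_ssl are all illustrative), and then solves the Laplace equation on the unlabeled nodes with the labels imposed as constraints.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import spsolve

def properly_weighted_laplacian_ssl(X, labeled_idx, labels, k=10, alpha=2.0, r0=None):
    """Hedged sketch of graph-Laplacian semi-supervised learning with a
    singular reweighting near labeled points (in the spirit of the paper;
    the specific weight gamma(x) = 1 + (r0 / d_label(x))**alpha is an assumption)."""
    n = X.shape[0]
    tree = cKDTree(X)
    dists, nbrs = tree.query(X, k=k + 1)        # k nearest neighbours (first column is the point itself)
    eps = np.mean(dists[:, -1])                 # graph length scale from the k-th neighbour distance

    # distance of every point to its closest labeled point
    d_lab, _ = cKDTree(X[labeled_idx]).query(X)
    if r0 is None:
        r0 = eps
    gamma = 1.0 + (r0 / np.maximum(d_lab, 1e-12)) ** alpha

    # edge weights: Gaussian kernel times the (averaged) singular reweighting
    rows = np.repeat(np.arange(n), k)
    cols = nbrs[:, 1:].ravel()
    base = np.exp(-(dists[:, 1:].ravel() / eps) ** 2)
    w = 0.5 * (gamma[rows] + gamma[cols]) * base
    W = coo_matrix((w, (rows, cols)), shape=(n, n))
    W = 0.5 * (W + W.T)                         # symmetrize the kNN graph

    # unnormalized graph Laplacian L = D - W
    L = diags(np.asarray(W.sum(axis=1)).ravel()) - W
    L = L.tocsr()

    # hard constraints: u = labels on labeled nodes, (L u) = 0 on unlabeled nodes
    labels = np.asarray(labels, dtype=float)
    mask = np.zeros(n, dtype=bool)
    mask[labeled_idx] = True
    u = np.zeros(n)
    u[labeled_idx] = labels
    A = L[~mask][:, ~mask]
    b = -L[~mask][:, mask] @ labels
    u[~mask] = spsolve(A.tocsc(), b)
    return u
```

For a two-class problem one might pass labels in {-1, +1} and threshold the returned values at zero. The paper's continuum analysis indicates that the strength of the singularity (alpha above) must be chosen large enough relative to the intrinsic dimension for the labels to be retained in the large-sample limit; the default here is purely illustrative.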
Cites work
- scientific article; zbMATH DE number 3930566 (no title available)
- scientific article; zbMATH DE number 1049350 (no title available)
- scientific article; zbMATH DE number 1865939 (no title available)
- A first course in Sobolev spaces
- A transportation \(L^p\) distance for signal analysis
- A variational approach to the consistency of spectral clustering
- An introduction to \(\Gamma\)-convergence
- Analysis of \(p\)-Laplacian regularization in semisupervised learning
- Consistency of Cheeger and ratio graph cuts
- Consistency of Lipschitz learning with infinite unlabeled data and finite labeled data
- Continuum limit of total variation on point clouds
- Determining Intrinsic Dimension and Entropy of High-Dimensional Shape Spaces
- Distance-based classification with Lipschitz functions
- Error estimates for spectral convergence of the graph Laplacian on random geometric graphs toward the Laplace-Beltrami operator
- From graph to manifold Laplacian: the convergence rate
- Large data and zero noise limits of graph-based semi-supervised learning algorithms
- Minimax grid matching and empirical measures
- Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems
- On optimal matchings
- On the Rate of Convergence of Empirical Measures in ∞-transportation Distance
- Optimal Transport
- Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations
- Optimal transport for applied mathematicians. Calculus of variations, PDEs, and modeling
- The game theoretic \(p\)-Laplacian and semi-supervised learning with few labels
- Tight bounds for minimax grid matching with applications to the average case analysis of algorithms
- Uncertainty quantification in graph-based classification of high dimensional data
- Upper and lower bounds for stochastic processes. Modern methods and classical problems
- Weighted nonlocal Laplacian on interpolation from sparse data
Cited in (24)
- Poisson Reweighted Laplacian Uncertainty Sampling for Graph-Based Active Learning
- From graph cuts to isoperimetric inequalities: convergence rates of Cheeger cuts on data clouds
- Semi-supervised learning with local and global consistency
- Posterior consistency of semi-supervised regression on graphs
- HesGCN: Hessian graph convolutional networks for semi-supervised classification
- Large data and zero noise limits of graph-based semi-supervised learning algorithms
- On the effectiveness of Laplacian normalization for graph semi-supervised learning
- Weighted co-association rate-based Laplacian regularized label description for semi-supervised regression
- Ratio convergence rates for Euclidean first-passage percolation: applications to the graph infinity Laplacian
- A continuum limit for the PageRank algorithm
- Boundary estimation from point clouds: algorithms, guarantees and applications
- Semi-supervised learning with regularized Laplacian
- A spectral approach to the shortest path problem
- Consistency of fractional graph-Laplacian regularization in semisupervised learning with finite labels
- Lipschitz regularity of graph Laplacians on random data clouds
- A maximum principle argument for the uniform convergence of graph Laplacian regressors
- Gromov-Hausdorff limit of Wasserstein spaces on point clouds
- Laplacian-optimized diffusion for semi-supervised learning
- Multiscale elliptic PDE upscaling and function approximation via subsampled data
- On the consistency of graph-based Bayesian semi-supervised learning and the scalability of sampling algorithms
- Learning Theory
- Adaptive edge weighting for graph-based learning algorithms
- Semi-supervised eigenvectors for large-scale locally-biased learning
- Performance analysis of the LapRSSLG algorithm in learning theory