Consistency of Lipschitz Learning with Infinite Unlabeled Data and Finite Labeled Data

From MaRDI portal

DOI: 10.1137/18M1199241
zbMath: 1499.35598
arXiv: 1710.10364
OpenAlex: W2992402078
Wikidata: Q126638071
Scholia: Q126638071
MaRDI QID: Q5025772

Jeff Calder

Publication date: 3 February 2022

Published in: SIAM Journal on Mathematics of Data Science

Full work available at URL: https://arxiv.org/abs/1710.10364



Related Items

Lipschitz Regularity of Graph Laplacians on Random Data Clouds
Convergence of Physics-Informed Neural Networks Applied to Linear Second-Order Elliptic Interface Problems
A continuum limit for the PageRank algorithm
Multiscale Elliptic PDE Upscaling and Function Approximation via Subsampled Data
Analysis and algorithms for \(\ell_p\)-based semi-supervised learning on graphs
Improved spectral convergence rates for graph Laplacians on \(\varepsilon\)-graphs and \(k\)-NN graphs
Boundary estimation from point clouds: algorithms, guarantees and applications
Data-driven method to learn the most probable transition pathway and stochastic differential equation
Solving an inverse source problem by deep neural network method with convergence and error analysis
Rates of convergence for Laplacian semi-supervised learning with low labeling rates
Poisson Reweighted Laplacian Uncertainty Sampling for Graph-Based Active Learning
The infinity Laplacian eigenvalue problem: reformulation and a numerical scheme
Continuum limit of Lipschitz learning on graphs
Function approximation via the subsampled Poincaré inequality
Gromov-Hausdorff limit of Wasserstein spaces on point clouds
On the Convergence of Physics Informed Neural Networks for Linear Second-Order Elliptic and Parabolic Type PDEs
Properly-weighted graph Laplacian for semi-supervised learning
Iterative surrogate model optimization (ISMO): an active learning algorithm for PDE constrained optimization with deep neural networks
