Poisson Reweighted Laplacian Uncertainty Sampling for Graph-Based Active Learning
Publication:6151664
MSC classification
- Learning and adaptive systems in artificial intelligence (68T05)
- Random graphs (graph-theoretic aspects) (05C80)
- Probability in computer science (algorithm analysis, random structures, phase transitions, etc.) (68Q87)
- Extremal problems in graph theory (05C35)
- PDEs in connection with statistics (35Q62)
Abstract: We show that uncertainty sampling is sufficient to achieve exploration versus exploitation in graph-based active learning, as long as the measure of uncertainty properly aligns with the underlying model and the model properly reflects uncertainty in unexplored regions. In particular, we use a recently developed algorithm, Poisson ReWeighted Laplace Learning (PWLL), as the classifier, and we introduce an acquisition function designed to measure uncertainty in this graph-based classifier that identifies unexplored regions of the data. We introduce a diagonal perturbation in PWLL that produces exponential localization of solutions and controls the exploration-versus-exploitation tradeoff in active learning. We use the well-posed continuum limit of PWLL to rigorously analyze our method, and we present experimental results on a number of graph-based image classification problems.
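The abstract's core loop — train a graph-based classifier on the current labels, then query the point where the classifier is most uncertain — can be sketched as follows. This is a minimal illustration using plain Laplace learning with a diagonal perturbation `tau` as a simplified stand-in for the paper's PWLL; the toy data, Gaussian-kernel graph, and margin-based acquisition function are all illustrative assumptions, not the paper's exact construction.

```python
# Sketch: graph-based active learning via uncertainty sampling.
# Laplace learning with a diagonal perturbation tau stands in for PWLL;
# data, graph weights, and the margin acquisition are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters with binary labels.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
n = len(X)

# Weighted similarity graph (Gaussian kernel) and Laplacian L = D - W.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

def laplace_learning(labeled, y, tau=0.0):
    """Solve (L + tau*I) u = 0 on unlabeled nodes, with u fixed to
    one-hot labels on labeled nodes. tau > 0 is the diagonal
    perturbation that, in the paper's analysis, localizes solutions
    and thereby drives exploration of unlabeled regions."""
    unl = np.setdiff1d(np.arange(n), labeled)
    A = L + tau * np.eye(n)
    U = np.zeros((n, 2))
    U[labeled, y[labeled]] = 1.0
    # Linear solve restricted to the unlabeled block.
    U[unl] = np.linalg.solve(
        A[np.ix_(unl, unl)], -A[np.ix_(unl, labeled)] @ U[labeled]
    )
    return U

labeled = [0, 99]  # one seed label per cluster
for _ in range(5):
    U = laplace_learning(np.array(labeled), y, tau=0.1)
    margin = np.abs(U[:, 0] - U[:, 1])      # small margin = high uncertainty
    margin[labeled] = np.inf                # never re-query labeled points
    labeled.append(int(np.argmin(margin)))  # uncertainty-sampling query

pred = laplace_learning(np.array(labeled), y, tau=0.1).argmax(1)
print("labeled set:", sorted(labeled))
print("accuracy:", (pred == y).mean())
```

With the perturbation, solutions decay away from labeled nodes, so the smallest-margin point tends to sit in a region far from all current labels — which is the exploration behavior the abstract attributes to the diagonal perturbation in PWLL.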
Cites work
- Active learning
- Agnostic active learning
- An Introduction to Variational Autoencoders
- Analysis and algorithms for \(\ell_p\)-based semi-supervised learning on graphs
- Analysis of \(p\)-Laplacian regularization in semisupervised learning
- Cautious active clustering
- Consistency of Lipschitz learning with infinite unlabeled data and finite labeled data
- Diffuse interface models on graphs for classification of high dimensional data
- Diffusion maps
- Error estimates for spectral convergence of the graph Laplacian on random geometric graphs toward the Laplace-Beltrami operator
- Graph Laplacians and their convergence on random neighborhood graphs
- Graph-based optimization approaches for machine learning, uncertainty quantification and networks
- Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data
- Improved spectral convergence rates for graph Laplacians on \(\varepsilon \)-graphs and \(k\)-NN graphs
- Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
- Large data and zero noise limits of graph-based semi-supervised learning algorithms
- Learning Theory
- Learning Theory
- Margin Based Active Learning
- Minimax analysis of active learning
- Properly-weighted graph Laplacian for semi-supervised learning
- Rates of convergence for Laplacian semi-supervised learning with low labeling rates
- Semi-supervised learning on Riemannian manifolds
- Support vector machine active learning with applications to text classification
- The game theoretic \(p\)-Laplacian and semi-supervised learning with few labels
- Theory of Disagreement-Based Active Learning
- Two faces of active learning
- Weighted nonlocal Laplacian on interpolation from sparse data