Class-prior estimation for learning from positive and unlabeled data
From MaRDI portal
Publication:2398088
Abstract: We consider the problem of estimating the class prior in an unlabeled dataset. Under the assumption that an additional labeled dataset is available, the class prior can be estimated by fitting a mixture of class-wise data distributions to the unlabeled data distribution. However, in practice, such an additional labeled dataset is often not available. In this paper, we show that, with additional samples coming only from the positive class, the class prior of the unlabeled dataset can be estimated correctly. Our key idea is to use properly penalized divergences for model fitting to cancel the error caused by the absence of negative samples. We further show that the use of the penalized \(L_1\)-distance gives a computationally efficient algorithm with an analytic solution. The consistency, stability, and estimation error are theoretically analyzed. Finally, we experimentally demonstrate the usefulness of the proposed method.
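The abstract's opening sentence describes the fully supervised baseline: given labeled samples from both classes, the class prior \(\pi\) can be estimated by fitting the mixture \(\pi p_{\mathrm P}(x) + (1-\pi) p_{\mathrm N}(x)\) to the unlabeled density. A minimal sketch of that baseline (not the paper's penalized, positive-only estimator), using Gaussian kernel density estimates and a grid search over \(\pi\) on synthetic 1-D data; all names, bandwidths, and data parameters here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
true_prior = 0.3

# Synthetic 1-D data: positives ~ N(0, 1), negatives ~ N(4, 1).
pos = rng.normal(0.0, 1.0, 500)
neg = rng.normal(4.0, 1.0, 500)
n = 2000
is_pos = rng.random(n) < true_prior
unl = np.where(is_pos, rng.normal(0.0, 1.0, n), rng.normal(4.0, 1.0, n))

def kde(samples, grid, h=0.3):
    # Gaussian kernel density estimate evaluated on a fixed grid.
    d = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * d**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

grid = np.linspace(-5.0, 10.0, 600)
dx = grid[1] - grid[0]
p_pos, p_neg, p_unl = kde(pos, grid), kde(neg, grid), kde(unl, grid)

# Grid search: pick the prior minimizing the L1 distance between
# the class-wise mixture and the unlabeled density estimate.
priors = np.linspace(0.0, 1.0, 101)
l1 = [np.abs(pi * p_pos + (1 - pi) * p_neg - p_unl).sum() * dx
      for pi in priors]
est = priors[int(np.argmin(l1))]
print(f"estimated prior: {est:.2f}")  # typically close to true_prior
```

The paper's contribution is the harder positive-only setting, where \(p_{\mathrm N}\) is unavailable and naive matching of \(\pi p_{\mathrm P}\) to \(p\) is biased; the proposed penalized divergence cancels that bias and, for the penalized \(L_1\)-distance, admits an analytic solution rather than a grid search.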
Recommendations
- Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric
- Learning from positive and unlabeled examples
- Learning from positive and unlabeled data: a survey
- Information-theoretic representation learning for positive-unlabeled classification
Cites work
- scientific article; zbMATH DE number 4170917
- scientific article; zbMATH DE number 3252891
- scientific article; zbMATH DE number 3322635
- A Neyman–Pearson Approach to Statistical Learning
- A least-squares approach to direct importance estimation
- Adjusting the outputs of a classifier to new a priori probabilities: A simple procedure
- Dual representation of \(\phi\)-divergences and applications
- On Information and Sufficiency
- Optimization Problems with Perturbations: A Guided Tour
- Perturbed Optimization in Banach Spaces I: A General Theory Based on a Weak Directional Constraint Qualification
- Semi-supervised novelty detection
Cited in (18)
- Joint feature selection and classification for positive unlabelled multi-label data using weighted penalized empirical risk minimization
- Estimating the class prior for positive and unlabelled data via logistic regression
- A two-step anomaly detection based method for PU classification in imbalanced data sets
- A boosting framework for positive-unlabeled learning
- Estimating labels from label proportions
- Positive and unlabeled relational classification through label frequency estimation
- Information-theoretic representation learning for positive-unlabeled classification
- Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric
- Classification from only positive and unlabeled functional data
- Learning from positive and unlabeled examples
- Positive-unlabeled classification under class-prior shift: a prior-invariant approach based on density ratio estimation
- Triply stochastic gradient method for large-scale nonlinear similar unlabeled classification
- scientific article; zbMATH DE number 2080441
- Learning from positive and unlabeled data: a survey
- Semi-supervised AUC optimization based on positive-unlabeled learning
- On the stopping criteria for \(k\)-nearest neighbor in positive unlabeled time series classification problems
- A graph-based approach for positive and unlabeled learning
- Bayesian logistic model for positive and unlabeled data
This page was built for publication: Class-prior estimation for learning from positive and unlabeled data