Neyman-Pearson classification: parametrics and sample size requirement
From MaRDI portal
Publication:4969044
Abstract: The Neyman-Pearson (NP) paradigm in binary classification seeks classifiers that achieve a minimal type II error while keeping the prioritized type I error below a user-specified level. This paradigm arises naturally in applications such as severe disease diagnosis and spam detection, where practitioners have a clear priority between the two error types. Recently, Tong, Feng and Li (2018) proposed a nonparametric umbrella algorithm that adapts all scoring-type classification methods (e.g., logistic regression, support vector machines, random forests) to respect the given type I error upper bound with high probability, without specific distributional assumptions on the features or the responses. Universal as the umbrella algorithm is, it demands an explicit minimum sample size for class 0, which is often the scarcer class, as in rare disease diagnosis applications. In this work, we adopt the parametric linear discriminant analysis (LDA) model and propose a new parametric thresholding algorithm that does not need a minimum number of class 0 observations and is thus suitable for small-sample applications such as rare disease diagnosis. Leveraging both the existing nonparametric and the newly proposed parametric thresholding rules, we propose four LDA-based NP classifiers for both low- and high-dimensional settings. On the theoretical front, we prove NP oracle inequalities for one proposed classifier, whose rate for the excess type II error benefits from the explicit parametric model assumption. Furthermore, since NP classifiers involve a step that splits the class 0 sample, we construct a new adaptive sample splitting scheme that can be applied universally to NP classifiers; this adaptive strategy reduces the type II error of these classifiers.
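The nonparametric thresholding step referenced above can be sketched as follows. This is a minimal illustration of the order-statistic rule of Tong, Feng and Li (2018): among left-out class 0 scores, pick the smallest order statistic whose binomial tail bound on the type I error violation probability falls below a tolerance delta. The function names here are our own for illustration, not the authors' implementation, and it also shows why a minimum class 0 sample size is unavoidable for this rule.

```python
from math import comb

def umbrella_order(n, alpha, delta):
    """Smallest order k such that thresholding at the k-th smallest
    class 0 score keeps P(type I error > alpha) <= delta.
    Returns None when even k = n fails, i.e. n is below the minimum
    sample size requirement n >= log(delta) / log(1 - alpha)."""
    for k in range(1, n + 1):
        # Violation probability for the k-th order statistic:
        # sum_{j=k}^{n} C(n, j) (1 - alpha)^j alpha^(n - j)
        v = sum(comb(n, j) * (1 - alpha) ** j * alpha ** (n - j)
                for j in range(k, n + 1))
        if v <= delta:
            return k
    return None

def np_threshold(scores0, alpha=0.05, delta=0.05):
    """Pick a threshold from left-out class 0 scores; a new point is
    classified as class 1 when its score exceeds the threshold.
    Returns None when there are too few class 0 observations."""
    n = len(scores0)
    k = umbrella_order(n, alpha, delta)
    if k is None:
        return None
    return sorted(scores0)[k - 1]
```

For alpha = delta = 0.05, the smallest workable class 0 sample size is 59, since (1 - 0.05)^58 > 0.05: with 58 scores no order statistic qualifies, while with 59 the largest one does. This dependence on the class 0 sample size is exactly what the paper's parametric LDA-based thresholding avoids.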
Recommendations
Cites work
- scientific article; zbMATH DE number 823069
- A Neyman–Pearson Approach to Statistical Learning
- A direct approach to sparse discriminant analysis in ultra-high dimensions
- A direct estimation approach to sparse linear discriminant analysis
- A plug-in approach to Neyman-Pearson classification
- A road to classification in high dimensional space: the regularized optimal affine discriminant
- A tail inequality for quadratic forms of subgaussian random vectors
- Analysis to Neyman-Pearson classification with convex loss function
- Isotropic local laws for sample covariance and generalized Wigner matrices
- Measuring mass concentrations and estimating density contour clusters -- An excess mass approach
- Neyman-Pearson classification, convexity and stochastic constraints
- Penalized classification using Fisher's linear discriminant
- Random forests
- Regularized linear discriminant analysis and its application in microarrays
- Smooth discrimination analysis
- Sparse linear discriminant analysis by thresholding for high dimensional data
- Variance and covariance inequalities for truncated joint normal distribution via monotone likelihood ratio and log-concavity
Cited in (6)
- nproc
- Neyman-Pearson classification under high-dimensional settings
- Asymmetric Error Control Under Imperfect Supervision: A Label-Noise-Adjusted Neyman–Pearson Umbrella Algorithm
- scientific article; zbMATH DE number 7370641
- A plug-in approach to Neyman-Pearson classification
- Model-averaging-based semiparametric modeling for conditional quantile prediction