One-pass AUC optimization
Publication: 286076
DOI: 10.1016/J.ARTINT.2016.03.003 · zbMATH Open: 1357.68168 · arXiv: 1305.1363 · OpenAlex: W2299684943 · MaRDI QID: Q286076 · FDO: Q286076
Rong Jin, Lu Wang, Wei Gao, Zhi-Hua Zhou, Shenghuo Zhu
Publication date: 19 May 2016
Published in: Artificial Intelligence
Abstract: AUC is an important performance measure, and many algorithms have been devoted to AUC optimization, mostly by minimizing a surrogate convex loss on a training dataset. In this work, we focus on one-pass AUC optimization, which requires going through the training data only once without storing the entire dataset; conventional online learning algorithms cannot be applied directly here because AUC is measured by a sum of losses defined over pairs of instances from different classes. We develop a regression-based algorithm which only needs to maintain the first- and second-order statistics of the training data in memory, resulting in a storage requirement independent of the size of the training data. To efficiently handle high-dimensional data, we develop a randomized algorithm that approximates the covariance matrices by low-rank matrices. We verify, both theoretically and empirically, the effectiveness of the proposed algorithm.
Full work available at URL: https://arxiv.org/abs/1305.1363
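The abstract only sketches the method at a high level. Below is a minimal illustrative sketch (not the authors' released code) of how a one-pass, regression-based AUC update can be run while storing only per-class first- and second-order statistics, assuming a linear scoring model and a pairwise squared-loss surrogate; the class name, hyperparameters, and streaming interface are assumptions made for illustration.

```python
# Minimal sketch of one-pass AUC optimization with a pairwise squared-loss
# surrogate. Memory holds only per-class means and second-moment matrices,
# so storage is independent of the number of training examples.
# NOTE: an illustrative sketch, not the reference implementation of the paper.
import numpy as np

class OnePassAUC:
    def __init__(self, dim, eta=0.1, lam=1e-3):
        self.w = np.zeros(dim)          # linear scoring model
        self.eta, self.lam = eta, lam   # step size and L2 regularization (illustrative values)
        # running first- and second-order statistics per class
        self.n = {+1: 0, -1: 0}
        self.mean = {+1: np.zeros(dim), -1: np.zeros(dim)}
        self.moment2 = {+1: np.zeros((dim, dim)), -1: np.zeros((dim, dim))}

    def partial_fit(self, x, y):
        """Process one example (x, y), y in {+1, -1}; each example is seen only once."""
        opp = -y
        if self.n[opp] > 0:
            # Gradient of the average squared pairwise loss
            #   0.5 * (1 - y * w^T (x - x'))^2 over all previously seen
            # opposite-class examples x', expressed through their mean and covariance.
            c = self.mean[opp]
            cov = self.moment2[opp] - np.outer(c, c)   # covariance of opposite class
            diff = x - c
            grad = (self.lam * self.w
                    - y * diff
                    + np.outer(diff, diff) @ self.w
                    + cov @ self.w)
            self.w -= self.eta * grad
        # Update this class's running statistics (still a single pass over the data).
        self.n[y] += 1
        k = self.n[y]
        self.mean[y] += (x - self.mean[y]) / k
        self.moment2[y] += (np.outer(x, x) - self.moment2[y]) / k

    def decision_function(self, X):
        return X @ self.w
```

For high-dimensional data, the d-by-d second-moment matrices in this sketch would be replaced by randomized low-rank approximations, as outlined in the abstract.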
Cites Work
- Pegasos: primal estimated sub-gradient solver for SVM
- Nonparametric and semiparametric estimation of the receiver operating characteristic curve
- Prediction, Learning, and Games
- Probability Inequalities for Sums of Bounded Random Variables
- Measuring classifier performance: a coherent alternative to the area under the ROC curve
- Robust classification for imprecise environments
- Ranking and empirical minimization of \(U\)-statistics
- Weighted sums of certain dependent random variables
- Generalization bounds for ranking algorithms via algorithmic stability
- Margin-based ranking and an equivalence between AdaBoost and RankBoost
- DOI 10.1162/1532443041827916 (title not available)
- Logarithmic Regret Algorithms for Online Convex Optimization
- Learning Theory
Cited In (6)
- Learning with mitigating random consistency from the accuracy measure
- Semi-supervised AUC optimization based on positive-unlabeled learning
- Optimizing area under the ROC curve using semi-supervised learning
- Stability and optimization error of stochastic gradient descent for pairwise learning
- Stochastic AUC optimization with general loss
- Title not available