Guaranteed Classification via Regularized Similarity Learning
Publication: 5378332
DOI: 10.1162/NECO_A_00556
zbMATH Open: 1410.68316
arXiv: 1306.3108
OpenAlex: W2129127886
Wikidata: Q45033678 (Scholia: Q45033678)
MaRDI QID: Q5378332
FDO: Q5378332
Authors: Zheng-Chu Guo, Yiming Ying
Publication date: 12 June 2019
Published in: Neural Computation
Abstract: Learning an appropriate (dis)similarity function from the available data is a central problem in machine learning, since the success of many machine learning algorithms critically depends on the choice of a similarity function to compare examples. Although many approaches to similarity metric learning have been proposed, there is little theoretical study of the link between similarity metric learning and the classification performance of the resulting classifier. In this paper, we propose a regularized similarity learning formulation associated with general matrix norms and establish its generalization bounds. We show that the generalization error of the resulting linear separator can be bounded by the derived generalization bound of similarity learning. This shows that good generalization of the learned similarity function guarantees good classification by the resulting linear classifier. Our results extend and improve those obtained by Bellet et al. [3]. Because the techniques used there depend on the notion of uniform stability [6], the bound obtained there holds true only for Frobenius matrix-norm regularization. Our techniques, which use the Rademacher complexity [5] and a related Khinchin-type inequality, enable us to establish bounds for regularized similarity learning formulations associated with general matrix norms, including the sparse \(L^1\)-norm and the mixed (2,1)-norm.
Full work available at URL: https://arxiv.org/abs/1306.3108
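For concreteness, the following is a minimal sketch of a regularized similarity learning formulation of the kind described in the abstract, assuming a bilinear similarity \(s_M(x, x') = x^\top M x'\), a pairwise hinge loss, and sparse \(L^1\)-norm regularization solved by proximal subgradient descent; the loss, solver, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Sketch under stated assumptions (not the paper's exact formulation):
# bilinear similarity s_M(x, x') = x^T M x', pairwise hinge loss, and
# sparse L1-norm regularization solved by proximal subgradient descent.

def pairwise_hinge_subgradient(M, X, y):
    """Subgradient of (1 / (n(n-1))) * sum_{i != j} max(0, 1 - y_ij * x_i^T M x_j),
    where y_ij = +1 if y_i == y_j and -1 otherwise."""
    n, d = X.shape
    G = np.zeros((d, d))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            y_ij = 1.0 if y[i] == y[j] else -1.0
            if 1.0 - y_ij * (X[i] @ M @ X[j]) > 0.0:
                # d(x_i^T M x_j)/dM = x_i x_j^T
                G -= y_ij * np.outer(X[i], X[j])
    return G / (n * (n - 1))

def soft_threshold(M, t):
    """Proximal operator of t * ||M||_1: entrywise soft-thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def learn_similarity(X, y, lam=0.1, step=0.1, iters=200):
    """Proximal subgradient descent on the L1-regularized pairwise objective."""
    M = np.zeros((X.shape[1], X.shape[1]))
    for _ in range(iters):
        M = soft_threshold(M - step * pairwise_hinge_subgradient(M, X, y),
                           step * lam)
    return M

# Example usage on toy data:
# X = np.random.randn(20, 5); y = np.random.randint(0, 2, size=20)
# M = learn_similarity(X, y)
# score = X[0] @ M @ X[1]   # bilinear similarity between two examples
```

Changing the matrix norm only changes the proximal map; for instance, the mixed (2,1)-norm mentioned in the abstract corresponds to group soft-thresholding applied to the rows of M.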
Recommendations
- Discriminatively regularized least-squares classification
- Classifier learning with a new locality regularization method
- Classification from pairwise similarities/dissimilarities and unlabeled data via empirical risk minimization
- Classification with guaranteed probability of error
- Learning similarity with operator-valued large-margin classifiers
- Large margin classification with indefinite similarities
- Generative models for similarity-based classification
- Semi-supervised learning with regularized Laplacian
Mathematics Subject Classification:
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- Title not available
- DOI: 10.1162/153244302760200704
- Title not available
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Ranking and empirical minimization of \(U\)-statistics
- Generalization bounds for metric and similarity learning
- Similarity-based classification: concepts and algorithms
- Large scale online learning of image similarity through ranking
- Title not available
- Learning similarity with operator-valued large-margin classifiers
- DOI: 10.1162/153244303321897690
- Learning the kernel matrix with semidefinite programming
- Title not available
- Title not available
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Title not available
- Regularization networks with indefinite kernels
Cited in 4 documents