Guaranteed Classification via Regularized Similarity Learning
From MaRDI portal
Publication:5378332
Abstract: Learning an appropriate (dis)similarity function from the available data is a central problem in machine learning, since the success of many machine learning algorithms critically depends on the choice of a similarity function to compare examples. Although many approaches to similarity metric learning have been proposed, there is little theoretical study of the links between similarity metric learning and the classification performance of the resulting classifier. In this paper, we propose a regularized similarity learning formulation associated with general matrix norms and establish its generalization bounds. We show that the generalization error of the resulting linear separator can be bounded by the derived generalization bound of similarity learning. This shows that good generalization of the learnt similarity function guarantees good classification by the resulting linear classifier. Our results extend and improve those obtained by Bellet et al. [3]. Because their techniques depend on the notion of uniform stability [6], the bound obtained there holds only for Frobenius matrix-norm regularization. Our techniques, which use the Rademacher complexity [5] and a related Khinchin-type inequality, enable us to establish bounds for regularized similarity learning formulations associated with general matrix norms, including the sparse \(L^1\)-norm and the mixed \((2,1)\)-norm.
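To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of regularized similarity learning with a bilinear similarity \(K_M(x, x') = x^\top M x'\): a hinge loss over same/different-label pairs plus a Frobenius-norm regularizer, one of the matrix norms the paper's general framework covers. The function names, step sizes, and stochastic subgradient scheme are illustrative assumptions.

```python
import numpy as np

def bilinear_similarity(M, x, xp):
    """K_M(x, x') = x^T M x' -- the linear similarity parameterized by M."""
    return x @ M @ xp

def similarity_learning(X, y, lam=0.1, lr=0.01, epochs=2000, seed=0):
    """Stochastic subgradient descent on a pairwise hinge loss plus
    (lam/2)||M||_F^2.  Illustrative sketch only: the paper's framework
    allows other matrix norms (e.g. L1 or mixed (2,1)) in place of the
    Frobenius regularizer used here."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    M = np.zeros((d, d))
    for _ in range(epochs):
        # Sample a random pair; r = +1 for same label, -1 otherwise.
        i, j = rng.integers(n), rng.integers(n)
        r = 1.0 if y[i] == y[j] else -1.0
        margin = r * bilinear_similarity(M, X[i], X[j])
        grad = lam * M                      # gradient of the regularizer
        if margin < 1.0:                    # hinge-loss subgradient is active
            grad -= r * np.outer(X[i], X[j])
        M -= lr * grad
    return M
```

Under this sketch, a learnt \(M\) should score same-class pairs higher than cross-class pairs, which is exactly the property the paper links to the generalization of the downstream linear classifier.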
Recommendations
- Discriminatively regularized least-squares classification
- Classifier learning with a new locality regularization method
- Classification from pairwise similarities/dissimilarities and unlabeled data via empirical risk minimization
- Classification with guaranteed probability of error
- Learning similarity with operator-valued large-margin classifiers
- Large margin classification with indefinite similarities
- Generative models for similarity-based classification
- Semi-supervised learning with regularized Laplacian
Cites work
- scientific article; zbMATH DE number 5957283 (no title available)
- scientific article; zbMATH DE number 49190 (no title available)
- scientific article; zbMATH DE number 192703 (no title available)
- scientific article; zbMATH DE number 1254560 (no title available)
- scientific article; zbMATH DE number 1332320 (no title available)
- 10.1162/153244302760200704
- 10.1162/153244303321897690
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Generalization bounds for metric and similarity learning
- Large scale online learning of image similarity through ranking
- Learning similarity with operator-valued large-margin classifiers
- Learning the kernel matrix with semidefinite programming
- Ranking and empirical minimization of \(U\)-statistics
- Regularization networks with indefinite kernels
- Regularization techniques for learning with matrices
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Similarity-based classification: concepts and algorithms
Cited in (8)
- Functional analysis techniques to improve similarity matrices in discrimination problems
- Online regularized learning with pairwise loss functions
- Good edit similarity learning by loss minimization
- Fast generalization rates for distance metric learning. Improved theoretical analysis for smooth strongly convex distance metric learning
- Generalization bounds for metric and similarity learning
- Indefinite proximity learning: a review
- Generalization analysis of multi-modal metric learning
- How Good Is a Kernel When Used as a Similarity Measure?
This page was built for publication: Guaranteed Classification via Regularized Similarity Learning