Generalization bounds for metric and similarity learning

DOI: 10.1007/s10994-015-5499-7
zbMATH Open: 1345.68250
arXiv: 1207.5437
OpenAlex: W1490654344
MaRDI QID: Q255367
FDO: Q255367

Qiong Cao, Zheng-Chu Guo, Yiming Ying

Publication date: 9 March 2016

Published in: Machine Learning

Abstract: Recently, metric learning and similarity learning have attracted a large amount of interest. Many models and optimization algorithms have been proposed. However, there is relatively little work on the generalization analysis of such methods. In this paper, we derive novel generalization bounds for metric and similarity learning. In particular, we first show that the generalization analysis reduces to the estimation of the Rademacher average over "sums-of-i.i.d." sample-blocks related to the specific matrix norm. Then, we derive generalization bounds for metric/similarity learning with different matrix-norm regularizers by estimating their specific Rademacher complexities. Our analysis indicates that sparse metric/similarity learning with L1-norm regularization could lead to significantly better bounds than those with Frobenius-norm regularization. Our novel generalization analysis develops and refines the techniques of U-statistics and Rademacher complexity analysis.
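
For context, the object analyzed is a regularized empirical objective over sample pairs. As a minimal sketch (assuming pairwise similarity labels $r(x_i, x_j) \in \{-1, +1\}$, a bias term $b$, and a hinge loss, which is the standard formulation in this literature rather than necessarily the paper's exact one), metric learning selects a positive semi-definite matrix $M$ and bias $b$ via

$$\min_{M \succeq 0,\; b \in \mathbb{R}} \ \frac{2}{n(n-1)} \sum_{i<j} \Big(1 + r(x_i, x_j)\big(d_M^2(x_i, x_j) - b\big)\Big)_+ \; + \; \lambda \|M\|, \qquad d_M^2(x, x') = (x - x')^\top M (x - x'),$$

where $\|M\|$ is the chosen matrix norm (e.g. Frobenius or elementwise $L^1$). Because the empirical term averages over all pairs rather than over i.i.d. samples, it is a U-statistic; Hoeffding's decomposition expresses such a U-statistic as an average of "sums-of-i.i.d." sample-blocks, which is why the Rademacher average mentioned in the abstract is taken over those blocks.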


Full work available at URL: https://arxiv.org/abs/1207.5437

Cited In (22)