Expanding the family of Grassmannian kernels: an embedding perspective

From MaRDI portal
Publication:5264263

DOI: 10.1007/978-3-319-10584-0_27 · zbMATH Open: 1376.94021 · arXiv: 1407.1123 · OpenAlex: W2228536296 · MaRDI QID: Q5264263


Authors: Mehrtash Harandi, Mathieu Salzmann, Sadeep Jayasumana, Hongdong Li, Richard I. Hartley


Publication date: 24 July 2015

Published in: Computer Vision – ECCV 2014

Abstract: Modeling videos and image-sets as linear subspaces has proven beneficial for many visual recognition tasks. However, it also incurs challenges arising from the fact that linear subspaces do not obey Euclidean geometry, but lie on a special type of Riemannian manifold known as the Grassmannian. To leverage the techniques developed for Euclidean spaces (e.g., support vector machines) with subspaces, several recent studies have proposed to embed the Grassmannian into a Hilbert space by making use of a positive definite kernel. Unfortunately, only two Grassmannian kernels are known, none of which, as we will show, is universal, which limits their ability to approximate a target function arbitrarily well. Here, we introduce several positive definite Grassmannian kernels, including universal ones, and demonstrate their superiority over previously known kernels in various tasks, such as classification, clustering, sparse coding and hashing.
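To illustrate the setting the abstract describes, the sketch below (using NumPy; not code from the paper) computes the classical projection kernel k(X, Y) = ||XᵀY||²_F between two subspaces represented by orthonormal basis matrices, plus an RBF-style variant built on the projection embedding, in the spirit of the universal kernels the paper introduces. The bandwidth parameter `beta` is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def orthonormal_basis(A):
    """Orthonormal basis of the column span of A (a point on the Grassmannian)."""
    Q, _ = np.linalg.qr(A)
    return Q

def projection_kernel(X, Y):
    """Projection kernel k(X, Y) = ||X^T Y||_F^2 between p-dimensional
    subspaces of R^n with orthonormal bases X, Y of shape (n, p).
    One of the two classical positive definite Grassmannian kernels."""
    return np.linalg.norm(X.T @ Y, "fro") ** 2

def projection_rbf_kernel(X, Y, beta=1.0):
    """RBF-style kernel exp(-beta * ||X X^T - Y Y^T||_F^2) built on the
    projection embedding; beta is an illustrative bandwidth parameter."""
    d2 = np.linalg.norm(X @ X.T - Y @ Y.T, "fro") ** 2
    return np.exp(-beta * d2)

# Two random 2-dimensional subspaces of R^5:
X = orthonormal_basis(rng.standard_normal((5, 2)))
Y = orthonormal_basis(rng.standard_normal((5, 2)))

print(projection_kernel(X, X))  # equals p = 2 for identical subspaces
print(projection_kernel(X, Y))  # lies in [0, p]
```

With such a kernel in hand, any kernel method (e.g., a kernel SVM) can operate on subspace data as if it lived in a Hilbert space.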


Full work available at URL: https://arxiv.org/abs/1407.1123









Cited In (7)





