Analysis of (sub-)Riemannian PDE-G-CNNs

From MaRDI portal
Publication:6186783

DOI: 10.1007/S10851-023-01147-W
arXiv: 2210.00935
OpenAlex: W4366003524
MaRDI QID: Q6186783
FDO: Q6186783


Authors: Gijs Bellaard, Daan L. J. Bon, Gautam Pai, Bart M. N. Smets, Remco Duits


Publication date: 10 January 2024

Published in: Journal of Mathematical Imaging and Vision

Abstract: Group equivariant convolutional neural networks (G-CNNs) have been successfully applied in geometric deep learning. Typically, G-CNNs have the advantage over CNNs that they do not waste network capacity on training symmetries that should have been hard-coded in the network. The recently introduced framework of PDE-based G-CNNs (PDE-G-CNNs) generalises G-CNNs. PDE-G-CNNs have the core advantages that they simultaneously 1) reduce network complexity, 2) increase classification performance, and 3) provide geometric interpretability. Their implementations primarily consist of linear and morphological convolutions with kernels. In this paper we show that the previously suggested approximative morphological kernels do not always approximate the exact kernels accurately. More specifically, depending on the spatial anisotropy of the Riemannian metric, we argue that one must resort to sub-Riemannian approximations. We solve this problem by providing a new approximative kernel that works regardless of the anisotropy. We provide new theorems with better error estimates of the approximative kernels, and prove that they all carry the same reflectional symmetries as the exact ones. We test the effectiveness of multiple approximative kernels within the PDE-G-CNN framework on two datasets, and observe an improvement with the new approximative kernels. We report that the PDE-G-CNNs again allow for a considerable reduction of network complexity while having comparable or better performance than G-CNNs and CNNs on the two datasets. Moreover, PDE-G-CNNs have the advantage of better geometric interpretability over G-CNNs, as the morphological kernels are related to association fields from neurogeometry.
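The abstract states that PDE-G-CNN layers are built from linear and morphological convolutions with kernels. As a point of reference for the latter, the sketch below is a minimal 1D Euclidean morphological (inf-)convolution in NumPy, with the quadratic kernel that solves the 1D Hamilton–Jacobi erosion PDE. This is purely illustrative: the paper's kernels live on homogeneous spaces with (sub-)Riemannian metrics, and all names here (`morphological_convolution`, the signal `f`, the time parameter `t`) are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def morphological_convolution(f, kernel):
    """Morphological (inf-)convolution: out(x) = min_y f(y) + kernel(|x - y|).

    Illustrative 1D Euclidean version only; PDE-G-CNNs use group-equivariant
    (sub-)Riemannian kernels on homogeneous spaces instead.
    """
    n = len(f)
    offsets = np.arange(n)
    out = np.empty(n)
    for x in range(n):
        # kernel is sampled at the nonnegative offsets |x - y|
        out[x] = np.min(f + kernel[np.abs(x - offsets)])
    return out

# Quadratic kernel k(x) = x^2 / (4t): the exact morphological kernel of the
# 1D Hamilton-Jacobi PDE u_t = -|u_x|^2 (erosion semigroup), for evolution time t.
t = 0.5
xs = np.arange(8, dtype=float)
kernel = xs ** 2 / (4 * t)

f = np.array([3., 1., 4., 1., 5., 9., 2., 6.])
eroded = morphological_convolution(f, kernel)
# Erosion never increases values: eroded <= f pointwise (kernel[0] = 0).
```

Stacking such morphological convolutions with trainable metric parameters, alternated with linear convolutions, is the structural idea the abstract refers to; the paper's contribution concerns how accurately closed-form approximations of the exact (sub-)Riemannian kernels can replace them.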


Full work available at URL: https://arxiv.org/abs/2210.00935










Cited In (1)





This page was built for publication: Analysis of (sub-)Riemannian PDE-G-CNNs
