Iterative SE(3)-transformers
Publication: 2117906
DOI: 10.1007/978-3-030-80209-7_63
zbMATH Open: 1486.92140
arXiv: 2102.13419
OpenAlex: W3183520159
MaRDI QID: Q2117906
FDO: Q2117906
Authors: Fabian Fuchs, Edward Wagstaff, Justas Dauparas, Ingmar Posner
Publication date: 22 March 2022
Abstract: When manipulating three-dimensional data, it is possible to ensure that rotational and translational symmetries are respected by applying so-called SE(3)-equivariant models. Protein structure prediction is a prominent example of a task which displays these symmetries. Recent work in this area has successfully made use of an SE(3)-equivariant model, applying an iterative SE(3)-equivariant attention mechanism. Motivated by this application, we implement an iterative version of the SE(3)-Transformer, an SE(3)-equivariant attention-based model for graph data. We address the additional complications which arise when applying the SE(3)-Transformer in an iterative fashion, compare the iterative and single-pass versions on a toy problem, and consider why an iterative model may be beneficial in some problem settings. We make the code for our implementation available to the community.
Full work available at URL: https://arxiv.org/abs/2102.13419
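
The equivariance property described in the abstract can be illustrated with a short numerical check. The following is a minimal sketch in plain NumPy, not the authors' implementation: a toy layer that aggregates relative displacement vectors with invariant weights, so that rotating and translating the input point cloud rotates the output vectors and otherwise leaves them unchanged. All function names here are illustrative.

# Minimal sketch (not the authors' code): a toy SE(3)-equivariant layer
# on a point cloud, verifying the equivariance property numerically.
import numpy as np

def random_rotation(rng):
    # Sample a random 3x3 rotation matrix (det = +1) via QR decomposition.
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q * np.sign(np.diag(r))   # fix column signs for a uniform sample
    if np.linalg.det(q) < 0:      # ensure a proper rotation, not a reflection
        q[:, 0] *= -1
    return q

def equivariant_layer(x, h):
    # Each point aggregates displacement vectors to all other points,
    # weighted by an invariant function of the scalar features h.
    diff = x[None, :, :] - x[:, None, :]          # (n, n, 3) displacements
    w = np.exp(-(h[:, None] - h[None, :]) ** 2)   # (n, n) invariant weights
    return (w[:, :, None] * diff).sum(axis=1)     # (n, 3) vector outputs

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))   # point positions
h = rng.normal(size=5)        # scalar (type-0) features
R, t = random_rotation(rng), rng.normal(size=3)

out = equivariant_layer(x, h)
out_transformed = equivariant_layer(x @ R.T + t, h)

# Rotations act on the vector outputs; translations leave them unchanged.
assert np.allclose(out_transformed, out @ R.T, atol=1e-10)

The translation invariance comes from building the output purely from displacements x_j - x_i; the SE(3)-Transformer obtains the same guarantee for attention layers and for features of higher rotation order.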
Recommendations
- What is... an Equivariant Neural Network?
- \(\mathrm{SU}(1,1)\) equivariant neural networks and application to robust Toeplitz Hermitian positive definite matrix classification
- Using a graph transformer network to predict 3D coordinates of proteins via geometric algebra modelling
- Universal approximations of invariant maps by neural networks
- Probabilistic symmetries and invariant neural networks