Inclusion of domain-knowledge into GNNs using mode-directed inverse entailment
From MaRDI portal
Publication: 2127250
DOI: 10.1007/s10994-021-06090-8
OpenAlex: W3215004982
MaRDI QID: Q2127250
Authors: Yanyan Li
Publication date: 20 April 2022
Published in: Machine Learning
Full work available at URL: https://arxiv.org/abs/2105.10709
Keywords: inductive logic programming; background knowledge; graph neural networks; mode-directed inverse entailment; neuro-symbolic learning
Cites Work
- kLog: a language for logical and relational learning with kernels
- Graph kernels
- Inductive Logic Programming: Theory and methods
- Lifted relational neural networks: efficient learning of latent relational structures
- What kinds of relational features are useful for statistical learning?
- Parameter screening and optimisation for ILP using designed experiments
- Incorporating symbolic domain knowledge into graph neural networks
- Beyond graph neural networks with lifted relational neural networks
- Graph Representation Learning
- 10.1162/153244304773633861
- Representation Learning
- Inductive Logic Programming