Refining neural network predictions using background knowledge

From MaRDI portal
Publication: 6176232

DOI: 10.1007/S10994-023-06310-3
zbMATH Open: 1518.68295
arXiv: 2206.04976
MaRDI QID: Q6176232
FDO: Q6176232


Authors: Alessandro Daniele, Emile van Krieken, Luciano Serafini, Frank van Harmelen


Publication date: 22 August 2023

Published in: Machine Learning

Abstract: Recent work has shown that logical background knowledge can be used in learning systems to compensate for a lack of labeled training data. Many methods work by creating a loss function that encodes this knowledge. However, the logic is often discarded after training, even though it remains useful at test time. Instead, we ensure neural network predictions satisfy the knowledge by refining the predictions with an extra computation step. We introduce differentiable refinement functions that find a corrected prediction close to the original prediction. We study how to effectively and efficiently compute these refinement functions. Using a new algorithm called Iterative Local Refinement (ILR), we combine refinement functions to find refined predictions for logical formulas of any complexity. ILR finds refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot. Finally, ILR produces competitive results on the MNIST addition task.
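The abstract describes refinement functions that minimally correct fuzzy truth values so a logical formula is satisfied. The following is a hypothetical minimal sketch of that idea for Gödel-semantics conjunction and disjunction, not the authors' actual ILR implementation; the function names and the L-infinity-minimal correction rule are illustrative assumptions based on the abstract.

```python
import numpy as np

def refine_conjunction(truths, target):
    """Minimal correction for a Godel conjunction min(t_i).
    Raising every truth value below `target` up to `target` is the
    smallest change (in max-norm) that makes min(refined) >= target."""
    return np.maximum(truths, target)

def refine_disjunction(truths, target):
    """Minimal correction for a Godel disjunction max(t_i).
    Only the largest truth value needs to be raised to `target`."""
    refined = np.array(truths, dtype=float)
    if refined.max() < target:
        refined[np.argmax(refined)] = target
    return refined

# Toy usage: a network outputs fuzzy truth values for three literals,
# and we refine them so their conjunction holds with degree 1.0.
preds = np.array([0.9, 0.3, 0.6])
refined = refine_conjunction(preds, target=1.0)
```

In the paper's setting, such per-connective refinements are composed and iterated over a formula's parse tree (the "iterative local" part of ILR); the sketch above shows only the single-connective base case.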


Full work available at URL: https://arxiv.org/abs/2206.04976



This page was built for publication: Refining neural network predictions using background knowledge