Evaluating Prompt-based Question Answering for Object Prediction in the Open Research Knowledge Graph

Publication:6437460

arXiv: 2305.12900
MaRDI QID: Q6437460
FDO: Q6437460


Authors: Jennifer D'Souza, Moussab Hrou, Sören Auer


Publication date: 22 May 2023

Abstract: There have been many recent investigations into prompt-based training of transformer language models for new text genres in low-resource settings. The prompt-based training approach has been found to be effective in generalizing pre-trained or fine-tuned models for transfer to resource-scarce settings. This work, for the first time, reports results on adopting prompt-based training of transformers for scholarly knowledge graph object prediction. The work is unique in two main aspects. 1) It deviates from other works that propose entity and relation extraction pipelines for predicting objects of a scholarly knowledge graph. 2) While other works have tested the method on text genres relatively close to the general knowledge domain, we test the method on a significantly different domain, i.e. scholarly knowledge, in turn testing the linguistic, probabilistic, and factual generalizability of these large-scale transformer models. We find that (i) as expected, transformer models tested out-of-the-box underperform on a new domain of data, (ii) prompt-based training of the models achieves performance boosts of up to 40% in a relaxed evaluation setting, and (iii) testing the models on a starkly different domain, even with a clever training objective in a low-resource setting, makes the domain knowledge capture gap evident, offering an empirically verified incentive for investing more attention and resources in the scholarly domain in the context of transformer models.
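
The abstract frames scholarly knowledge graph object prediction as prompt-based question answering with transformer language models. The snippet below is a minimal illustrative sketch of that general idea, phrasing a (subject, predicate) pair as a cloze-style prompt and letting a masked language model rank candidate objects. The model checkpoint, prompt template, and example predicate are assumptions for illustration only and are not taken from the paper or its companion repository.

# Illustrative sketch (not the authors' code): scholarly knowledge graph
# object prediction as cloze-style, prompt-based question answering.
from transformers import pipeline

# Off-the-shelf masked language model; the paper's experiments may use
# different checkpoints and additional prompt-based fine-tuning.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# A (subject, predicate) pair verbalized into a prompt whose masked slot
# is the object to be predicted. Both values are hypothetical examples.
paper_title = "Evaluating Prompt-based Question Answering for Object Prediction"
predicate = "research problem"
prompt = f"The {predicate} addressed by the paper '{paper_title}' is [MASK]."

# Print the top-k candidate objects the model predicts for the masked slot.
for candidate in fill_mask(prompt, top_k=5):
    print(f"{candidate['token_str']}\t{candidate['score']:.4f}")

In practice, out-of-the-box predictions from such a general-domain model on scholarly prompts tend to be weak, which is the domain gap the abstract describes; prompt-based training on scholarly data is what yields the reported improvements.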




Has companion code repository: https://github.com/as18cia/thesis_work









