Are Neural Language Models Good Plagiarists? A Benchmark for Neural Paraphrase Detection (Q6717042)

From MaRDI portal





Dataset published at Zenodo repository.

      Statements

Full-Text PDF

Title: Are Neural Language Models Good Plagiarists? A Benchmark for Neural Paraphrase Detection
Authors: Jan Philip Wahle, Terry Ruas, Norman Meuschke, and Bela Gipp
Contact email: wahle@uni-wuppertal.de; ruas@uni-wuppertal.de
Venue: JCDL
Year: 2021

================================================================
Dataset Description:

Training: 1,474,230 aligned paragraphs (98,282 original; 1,375,948 paraphrased with 3 models and 5 hyperparameter configurations, 98,282 each) extracted from 4,012 (English) Wikipedia articles.

Testing (number of paragraphs per paraphrase model and source):

    Paraphrase model             Source       Original    Paraphrased
    BERT-large (cased)           arXiv        20,966      20,966
    BERT-large (cased)           Theses        5,226       5,226
    BERT-large (cased)           Wikipedia    39,241      39,241
    RoBERTa-large (cased)        arXiv        20,966      20,966
    RoBERTa-large (cased)        Theses        5,226       5,226
    RoBERTa-large (cased)        Wikipedia    39,241      39,241
    Longformer-large (uncased)   arXiv        20,966      20,966
    Longformer-large (uncased)   Theses        5,226       5,226
    Longformer-large (uncased)   Wikipedia    39,241      39,241

================================================================
Dataset Structure:

[og] folder: original documents, split by data source into the following folders:
    [arxiv]
    [thesis]
    [wikipedia]
    [wikipedia_train]

[`model_name`_mlm_prob_`probability`] (e.g., bert-large-cased_mlm_prob_0.15): contains all paraphrased examples generated with the model named `model_name` and Masked Language Modeling probability `probability`. Each paraphrase model/probability folder contains the corresponding paraphrased documents, organized like [og]:
    [arxiv]
    [thesis]
    [wikipedia]
    [wikipedia_train]

hparams.yml: contains the hyperparameters needed to reconstruct the dataset using the official repository.

================================================================
Files:

On the lowest folder level, each `.txt` file contains exactly one paragraph. The filename contains either ORIG for original or SPUN for paraphrased paragraphs.
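Given the folder layout and the ORIG/SPUN filename convention above, original and paraphrased paragraphs can be paired for a detection task. The sketch below is a minimal example, not part of the dataset; it assumes the folder names from the description ([og], the per-model folders, and the source subfolders) and assumes that a paraphrased file shares its original's filename except for the ORIG -> SPUN marker, which may differ in the released data.

```python
from pathlib import Path

def collect_pairs(root: str, model_dir: str, source: str):
    """Pair original (ORIG) and paraphrased (SPUN) paragraphs for one
    data source (e.g. 'arxiv') and one paraphrase configuration folder
    (e.g. 'bert-large-cased_mlm_prob_0.15')."""
    originals = Path(root, "og", source)
    spun_dir = Path(root, model_dir, source)
    pairs = []
    for orig_file in sorted(originals.glob("*ORIG*.txt")):
        # Assumed naming scheme: same filename with ORIG replaced by SPUN.
        spun_file = spun_dir / orig_file.name.replace("ORIG", "SPUN")
        if spun_file.exists():
            pairs.append((orig_file.read_text(encoding="utf-8"),
                          spun_file.read_text(encoding="utf-8")))
    return pairs
```

From such pairs, a labeled corpus for a paraphrase detector follows directly, e.g. label 0 for each original paragraph and 1 for its machine-paraphrased counterpart.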
================================================================
Code:

To avoid misuse of the code for constructing machine-paraphrased plagiarism, you must consent to our Terms and Conditions and send the signed version via mail to one of the contact addresses above to obtain access to our repository (see TermsAndConditions.pdf).
Publication date: 19 March 2021
Version: 1.0
