oolong (Q5975966)
programmed in: R
depends on software: R (software version identifier: ≥ 3.5.0)
MaRDI profile type: MaRDI software profile
Language | Label | Description | Also known as
---|---|---|---
English | oolong | Create Validation Tests for Automated Content Analysis |
Statements
oolong creates standard human-in-the-loop validity tests for typical automated content analysis methods such as topic modeling and dictionary-based approaches. The package offers a standard workflow with functions to prepare, administer, and evaluate a human-in-the-loop validity test. It provides functions for validating topic models using word intrusion and topic intrusion tests (Chang et al. 2009, <https://papers.nips.cc/paper/3700-reading-tea-leaves-how-humans-interpret-topic-models>) as well as word set intrusion tests (Ying et al. 2021, <doi:10.1017/pan.2021.33>), plus functions for generating gold-standard data that are useful for validating dictionary-based methods. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020) <doi:10.1080/10584609.2020.1723752>.
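As a rough sketch of the prepare/administer/evaluate workflow described above, the example below uses the package's create_oolong() and summarize_oolong() functions; the objects my_lda (a fitted topic model, e.g. from stm or topicmodels) and my_corpus (the underlying texts) are placeholders assumed to already exist in the session, not part of the package.

```r
# Sketch only: `my_lda` and `my_corpus` are placeholder objects that must be
# created beforehand (a fitted topic model and its source documents).
library(oolong)

# Prepare: build an intrusion test from the fitted model; supplying the
# corpus as well enables the topic intrusion test.
test <- create_oolong(input_model = my_lda, input_corpus = my_corpus)

# Administer: each call launches an interactive coding interface for a human
# rater; lock() then freezes the recorded answers.
test$do_word_intrusion_test()
test$do_topic_intrusion_test()
test$lock()

# Evaluate: summarize the locked test, e.g. model precision for word
# intrusion and topic log odds for topic intrusion (Chang et al. 2009).
summarize_oolong(test)
```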
10 February 2024
expanded from: LGPL (≥ 2.1) (English)