oolong
Create Validation Tests for Automated Content Analysis
Marius Sältzer, Chung-Hong Chan
Last update: 10 February 2024
Copyright license: GNU Lesser General Public License, versions 2.1 and 3.0
Software version identifier: 0.3.4, 0.3.11, 0.4.0, 0.4.1, 0.4.3, 0.5.0, 0.6.0
Creates standard human-in-the-loop validity tests for typical automated content analysis methods such as topic modeling and dictionary-based approaches. The package offers a standard workflow with functions to prepare, administer, and evaluate a human-in-the-loop validity test. It provides functions for validating topic models using word intrusion, topic intrusion (Chang et al. 2009, <https://papers.nips.cc/paper/3700-reading-tea-leaves-how-humans-interpret-topic-models>), and word set intrusion (Ying et al. 2021, <doi:10.1017/pan.2021.33>) tests, as well as functions for generating gold-standard data, which are useful for validating dictionary-based methods. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020) <doi:10.1080/10584609.2020.1723752>.
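The prepare-administer-evaluate workflow described above can be sketched in R roughly as follows. This is a minimal sketch based on the package's documented interface: the objects fitted_lda, my_corpus, and my_dictionary_scores are hypothetical placeholders supplied by the user, and the exact arguments of wi(), gs(), and summarize_oolong() should be checked against the package vignettes.

    library(oolong)

    ## Prepare: create a word intrusion test from a fitted topic model.
    ## fitted_lda is a hypothetical placeholder for a user-supplied model
    ## (e.g. from the topicmodels, stm, or keyATM packages).
    wi_test <- wi(fitted_lda, userid = "coder1")

    ## Administer: opens an interactive coding interface for the human coder.
    wi_test$do_word_intrusion_test()

    ## Evaluate: lock the finished test; printing the object reports precision.
    wi_test$lock()
    wi_test

    ## Gold-standard generation for validating dictionary-based methods:
    ## sample a fraction of documents from a corpus (my_corpus is hypothetical)
    ## for human coding on the target construct.
    gs_test <- gs(input_corpus = my_corpus, frac = 0.01, userid = "coder1")
    gs_test$do_gold_standard_test()
    gs_test$lock()

    ## turn_gold() returns the human-coded documents as a quanteda corpus;
    ## score them with a dictionary to obtain my_dictionary_scores (a
    ## hypothetical numeric vector), then compare against the human codings.
    gold_corpus <- gs_test$turn_gold()
    summarize_oolong(gs_test, target_value = my_dictionary_scores)

For word intrusion and topic intrusion tests, summarize_oolong() can also take several locked test objects from different coders to assess agreement between them.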