Analysis of Nonagreements among Multiple Raters
From MaRDI portal
Publication:3673914
Cited in (5):
- Confidence intervals for the interrater agreement measure kappa
- Testing observer uncertainty in a nominal-scale agreement analysis
- Assessing the reliability of ordered categorical scales using kappa-type statistics
- Fixed-effects modeling of Cohen's kappa for bivariate multinomial data
- Cohen's kappa can always be increased and decreased by combining categories