Measuring pairwise interobserver agreement when all subjects are judged by the same observers
DOI: 10.1111/j.1467-9574.1982.tb00774.x · zbMATH Open: 0499.62095 · OpenAlex: W2066803537 · MaRDI QID: Q135073 · FDO: Q135073
Author: H. J. A. Schouten
Publication date: June 1982
Published in: Statistica Neerlandica
Full work available at URL: https://doi.org/10.1111/j.1467-9574.1982.tb00774.x
Keywords: missing data; clinical diagnosis; degree of agreement; hierarchical clustering; linearized Taylor series expansion; weighted kappa coefficients
MSC classification:
- 62H99 Multivariate analysis
- 62P10 Applications of statistics to biology and medical sciences; meta analysis
- 62H20 Measures of association (correlation, canonical correlation, etc.)
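For orientation, the keywords refer to weighted kappa statistics computed for every pair of observers rating the same subjects. The following is a minimal Python sketch of a pairwise weighted kappa in the disagreement-weight form (kappa_w = 1 - D_o/D_e); the ratings data, function names, and the choice of quadratic weights are illustrative assumptions, not taken from Schouten's paper.

```python
from itertools import combinations

def weighted_kappa(r1, r2, categories, weight):
    """Weighted kappa between two raters, with weight(i, j) a
    disagreement weight: 0 for identical categories, larger for
    worse disagreement."""
    n = len(r1)
    # Observed mean disagreement weight across the n subjects.
    d_o = sum(weight(a, b) for a, b in zip(r1, r2)) / n
    # Expected mean disagreement weight under rater independence,
    # from each rater's marginal category frequencies.
    d_e = sum(weight(i, j) * (r1.count(i) / n) * (r2.count(j) / n)
              for i in categories for j in categories)
    return 1.0 - d_o / d_e

# Hypothetical data: 3 observers each rate the same 8 subjects on an
# ordinal scale {0, 1, 2}; quadratic disagreement weights (i - j)^2.
ratings = {
    "A": [0, 1, 2, 1, 0, 2, 1, 0],
    "B": [0, 1, 2, 2, 0, 2, 1, 1],
    "C": [1, 1, 2, 1, 0, 2, 0, 0],
}
quadratic = lambda i, j: (i - j) ** 2
for (a, ra), (b, rb) in combinations(sorted(ratings.items()), 2):
    print(f"kappa_w({a}, {b}) = "
          f"{weighted_kappa(ra, rb, [0, 1, 2], quadratic):.3f}")
```

With constant weight 1 for all off-diagonal pairs this reduces to the unweighted Cohen's kappa; the paper's setting additionally addresses missing ratings and standard errors via a linearized Taylor series expansion, which this sketch does not attempt.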
Cited In (6)
- Agreement between an isolated rater and a group of raters
- Assessing the reliability of ordered categorical scales using kappa-type statistics
- Statistical description of interrater variability in ordinal ratings
- Agreement between two independent groups of raters
- A new approach to inter-rater agreement through stochastic orderings: the discrete case
- A paired kappa to compare binary ratings across two medical tests