Statistical inference of agreement coefficient between two raters with binary outcomes
From MaRDI portal
Publication:5077201
DOI: 10.1080/03610926.2019.1576894
OpenAlex: W2918131037
Wikidata: Q128352998 (Scholia: Q128352998)
MaRDI QID: Q5077201
FDO: Q5077201
Authors: Tetsuji Ohyama
Publication date: 18 May 2022
Published in: Communications in Statistics - Theory and Methods
Full work available at URL: https://doi.org/10.1080/03610926.2019.1576894
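The paper concerns inference for an agreement coefficient between two raters with binary outcomes (the cited-in list points to Gwet's AC1). As a hedged illustration only, not the paper's method, the sketch below computes the point estimates of Cohen's kappa and Gwet's AC1 from a 2x2 contingency table; the cell counts are hypothetical.

```python
# Point estimates of two agreement coefficients for two raters with
# binary outcomes, computed from a 2x2 contingency table.
# Illustration under assumed cell counts, not data from the paper.

def kappa_and_ac1(a, b, c, d):
    """a = both positive, b = rater1 +/rater2 -, c = rater1 -/rater2 +, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                            # observed agreement
    p1 = (a + b) / n                            # rater 1's positive rate
    p2 = (a + c) / n                            # rater 2's positive rate
    pe_kappa = p1 * p2 + (1 - p1) * (1 - p2)    # chance agreement, Cohen's kappa
    kappa = (po - pe_kappa) / (1 - pe_kappa)
    pi = (p1 + p2) / 2                          # mean positive rate
    pe_ac1 = 2 * pi * (1 - pi)                  # chance agreement, Gwet's AC1
    ac1 = (po - pe_ac1) / (1 - pe_ac1)
    return kappa, ac1

kappa, ac1 = kappa_and_ac1(40, 5, 10, 45)  # hypothetical counts
```

For these counts the observed agreement is 0.85, giving kappa = 0.70 and AC1 ≈ 0.70; the two coefficients diverge more when the marginal positive rates are very unbalanced, which is the chance-correction issue AC1 was designed to mitigate.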
Recommendations
- On population‐based measures of agreement for binary classifications
- Measuring inter-rater agreement: how useful is the kappa statistic
- Agreement between two independent groups of raters
- Assessing interrater agreement on binary measurements via intraclass odds ratio
- Cohen's Kappa Statistic: A Critical Appraisal and Some Modifications
Cites Work
- Measuring Agreement for Multinomial Data
- Title not available
- The Measurement of Observer Agreement for Categorical Data
- Homogeneity Score Test for the Intraclass Version of the Kappa Statistics and Sample‐Size Determination in Multiple or Stratified Studies
- Extension of the Kappa Coefficient
- Ramifications of a population model for \(\kappa\) as a coefficient of reliability
- 2 x 2 Kappa Coefficients: Measures of Agreement or Association
- Testing the Homogeneity of Kappa Statistics
- Confidence Interval Estimation of the Intraclass Correlation Coefficient for Binary Outcome Data
- Measurement of Interrater Agreement with Adjustment for Covariates
- Weighted least-squares approach for comparing correlated kappa
- Interval Estimation for a Difference Between Intraclass Kappa Statistics
Cited In (4)
- Justification for the use of Cohen's kappa statistic in experimental studies of NLP and text mining
- Statistical methods to check agreement between two coding systems in the absence of double-coded data
- A Bayesian analysis for inter-rater agreement
- Statistical inference of Gwet’s AC1 coefficient for multiple raters and binary outcomes