Maximum Likelihood Estimation of Agreement in the Constant Predictive Probability Model, and Its Relation to Cohen's Kappa
DOI: 10.2307/2531434
zbMATH Open: 0715.62047
OpenAlex: W2046280714
Wikidata: Q68840017 (Scholia: Q68840017)
MaRDI QID: Q3201289 (FDO: Q3201289)
Authors: Aickin, Mikel
Publication date: 1990
Published in: Biometrics
Full work available at URL: https://doi.org/10.2307/2531434
Recommendations
- On population‐based measures of agreement for binary classifications
- Measuring inter-rater agreement: how useful is the kappa statistic
- The kappa measure of agreement between two groups of observers
- Interval Estimation of the Kappa Coefficient with Binary Classification and an Equal Marginal Probability Model
Mathematics Subject Classification: Point estimation (62F10); Applications of statistics to biology and medical sciences, meta analysis (62P10)
Cited In (6)
- A formal proof of a paradox associated with Cohen's kappa
- Conditional inference for subject‐specific and marginal agreement: Two families of agreement measures
- Measuring agreement using guessing models and knowledge coefficients
- Beyond kappa: A review of interrater agreement measures
- Fixed-effects modeling of Cohen's kappa for bivariate multinomial data
- Weighted inter-rater agreement measures for ordinal outcomes