Measuring Agreement for Multinomial Data
From MaRDI portal
Publication: 3966937
DOI: 10.2307/2529886
zbMath: 0501.62045
MaRDI QID: Q3966937
Publication date: 1982
Published in: Biometrics
Full work available at URL: https://doi.org/10.2307/2529886
Keywords: reliability; multinomial data; complete independence; asymptotic standard error; two-way layout; measure of agreement; intraclass correlation coefficients; kappa-like statistic
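The keywords above center on a kappa-like statistic: a measure of inter-rater agreement on multinomial (nominal) data that corrects observed agreement for the agreement expected under complete independence of the raters. As a minimal illustration (not the paper's own estimator), the classical two-rater Cohen's kappa can be sketched as:

```python
from collections import Counter

def cohen_kappa(ratings1, ratings2):
    """Chance-corrected agreement between two raters on nominal labels.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion
    of identical ratings and p_e is the agreement expected if the two
    raters assigned categories independently (complete independence).
    """
    assert len(ratings1) == len(ratings2)
    n = len(ratings1)
    # observed agreement: fraction of items rated identically
    p_o = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # expected agreement under independence, from the marginal frequencies
    c1, c2 = Counter(ratings1), Counter(ratings2)
    p_e = sum(c1[k] * c2.get(k, 0) for k in c1) / n**2
    return (p_o - p_e) / (1 - p_e)

# hypothetical ratings of six items by two raters, three categories
r1 = ["a", "a", "b", "b", "c", "a"]
r2 = ["a", "a", "b", "c", "c", "b"]
print(cohen_kappa(r1, r2))  # -> 0.5
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance; the related items below extend this idea to weighted, multi-rater, and model-based variants.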
62P10: Applications of statistics to biology and medical sciences; meta analysis
62H20: Measures of association (correlation, canonical correlation, etc.)
62P15: Applications of statistics to psychology
Related Items
Confidence intervals for the interrater agreement measure kappa
Weighted kappa as a function of unweighted kappas
Statistical inference of agreement coefficient between two raters with binary outcomes
Statistical description of interrater variability in ordinal ratings
Sklar's Omega: A Gaussian Copula-Based Framework for Assessing Agreement
Corrected Zegers-ten Berge coefficients are special cases of Cohen's weighted kappa
Bayesian testing of agreement criteria under order constraints
On the equivalence of multirater kappas based on 2-agreement and 3-agreement with binary scores
A comparison of reliability coefficients for ordinal rating scales
Cohen's linearly weighted kappa is a weighted average of \(2\times 2\) kappas
A family of multi-rater kappas that can always be increased and decreased by combining categories
Equivalences of weighted kappas for multiple raters
Modeling the agreement of discrete bivariate survival times using kappa coefficient
Inequalities between multi-rater kappas
Assessing agreement with multiple raters on correlated kappa statistics
Beyond kappa: A review of interrater agreement measures