Sen A, Li P, Ye W, and Franzblau A. Bayesian inference of dependent kappa for binary ratings. Stat Med 2021.
In medical and social science research, the reliability of testing methods, measured through inter- and intra-observer agreement, is critical to disease diagnosis. Comparison of agreement across multiple testing methods is often sought in situations where testing is carried out on the same experimental units, rendering the outcomes correlated. In this article, we first develop a Bayesian method for comparing dependent agreement measures under a grouped-data setting. Simulation studies show that the proposed methodology outperforms competing methods in terms of power while maintaining an acceptable type I error rate. We further develop a Bayesian joint model for comparing dependent agreement measures that adjusts for subject- and rater-level heterogeneity. Simulation studies indicate that this model outperforms a competing method used in this context. The methodology was applied to a key dichotomous rating from a study in which six raters evaluated chest radiographs for pneumoconiosis under three classification methods developed by the International Labour Office.
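The agreement measure underlying the paper is Cohen's kappa. As background for the abstract, the following is a minimal sketch of the classical (non-Bayesian) kappa for two raters on a binary scale; the function name and the example ratings are illustrative, not taken from the study data.

```python
def cohen_kappa(ratings_a, ratings_b):
    """Classical Cohen's kappa for two raters and binary (0/1) ratings.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e the agreement expected by chance
    from the raters' marginal rating frequencies.
    """
    n = len(ratings_a)
    # observed agreement: fraction of units rated identically
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # marginal probability of a "1" rating for each rater
    pa1 = sum(ratings_a) / n
    pb1 = sum(ratings_b) / n
    # chance agreement: both rate 1, or both rate 0
    p_e = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (p_o - p_e) / (1 - p_e)
```

With perfectly matching ratings this returns 1; when observed agreement equals chance agreement it returns 0. The paper's contribution is a Bayesian framework for comparing several such kappas when they are dependent, which this classical formula does not address.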
ePub ahead of print