Bayesian inference of dependent kappa for binary ratings

Document Type

Article

Publication Date

11-20-2021

Publication Title

Statistics in Medicine

Abstract

In medical and social science research, the reliability of testing methods, measured through inter- and intraobserver agreement, is critical in disease diagnosis. Comparison of agreement across multiple testing methods is often sought in situations where testing is carried out on the same experimental units, rendering the outcomes correlated. In this article, we first developed a Bayesian method for comparing dependent agreement measures under a grouped data setting. Simulation studies show that the proposed methodology outperforms competing methods in terms of power while maintaining a reasonable type I error rate. We further developed a Bayesian joint model for comparing dependent agreement measures while adjusting for subject- and rater-level heterogeneity. Simulation studies indicate that our model outperforms a competing method used in this context. The developed methodology was applied to a key measure on a dichotomous rating scale from a study in which six raters evaluated three classification methods, developed by the International Labor Office, for chest radiographs for pneumoconiosis.
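As background for the agreement measure named in the title, the sketch below is a minimal illustration, not the article's dependent-kappa model: it computes Cohen's kappa for two raters on a binary scale and draws posterior samples of kappa under an assumed conjugate Dirichlet-multinomial model on the 2x2 agreement table. The function names, the flat Dirichlet prior, and the example counts are all hypothetical.

    import numpy as np

    def cohen_kappa(table):
        # Cohen's kappa for a 2x2 table of binary ratings by two raters.
        table = np.asarray(table, dtype=float)
        n = table.sum()
        p_obs = np.trace(table) / n            # observed agreement
        row = table.sum(axis=1) / n            # rater 1 marginals
        col = table.sum(axis=0) / n            # rater 2 marginals
        p_exp = np.dot(row, col)               # chance agreement
        return (p_obs - p_exp) / (1.0 - p_exp)

    def kappa_posterior(table, n_draws=10_000, prior=1.0, rng=None):
        # Posterior draws of kappa under a Dirichlet(prior) on the four
        # cell probabilities (conjugate to the multinomial counts).
        # Illustrative only; not the paper's dependent-kappa model.
        rng = np.random.default_rng(rng)
        counts = np.asarray(table, dtype=float).ravel()
        draws = rng.dirichlet(counts + prior, size=n_draws)
        p = draws.reshape(n_draws, 2, 2)
        p_obs = p[:, 0, 0] + p[:, 1, 1]
        row = p.sum(axis=2)
        col = p.sum(axis=1)
        p_exp = (row * col).sum(axis=1)
        return (p_obs - p_exp) / (1.0 - p_exp)

    # Hypothetical agreement table for two raters on 100 subjects.
    table = [[40, 10],
             [5, 45]]
    print("point estimate:", cohen_kappa(table))
    post = kappa_posterior(table, rng=42)
    lo, hi = np.percentile(post, [2.5, 97.5])
    print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")

Comparing two such kappas when both testing methods rate the same subjects requires a joint model for the correlated tables rather than two independent posteriors; that dependence is the problem the article addresses.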

PubMed ID

34542193

ePublication

ePub ahead of print

Volume

40

Issue

26

First Page

5947

Last Page

5960
