Reliability of Identifying Diagnostic Error and Delay in Critically Ill Patients

Document Type

Conference Proceeding

Publication Date

2018

Publication Title

Crit Care Med

Abstract

Learning Objectives: Autopsy studies have attributed 10-20% of deaths to diagnostic error. Diagnostic error and delay contribute to avoidable illness. The process of reporting errors and near misses, however, remains underdeveloped and lacks a standardized measurement tool. The aim of this study was to develop a reliable, reproducible standard operating procedure (SOP) for data abstraction, using information available in the electronic medical record, to identify diagnostic errors and delays in adult nontrauma patients at risk for critical illness.

Methods: This was a retrospective observational study at Mayo Clinic Rochester reviewing a convenience sample of adult nontrauma patients admitted to the hospital in 2012 who had a rapid response team (RRT) call during their hospitalization. A standard operating procedure was developed to review electronic medical records using a taxonomy-based assessment of diagnostic error and delay that identified where in the diagnostic process the error or delay occurred and what it was. Diagnostic errors were further classified using a modified Goldman classification. Two critical care fellows independently reviewed all patients in the sample. Senior critical care clinicians arbitrated disagreements between the reviewers. Inter-rater reliability was assessed with kappa agreement statistics.
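For reference, inter-rater agreement in a design of this kind is conventionally quantified with Cohen's kappa, which corrects the observed agreement for agreement expected by chance; the expression below is a standard sketch of that statistic, not a formula taken from the study protocol itself.

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the observed proportion of cases on which the two reviewers agree and $p_e$ is the proportion of agreement expected by chance, computed from each reviewer's marginal classification rates.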

Results: A total of 1300 patients had an RRT call in 2012, of which our convenience sample comprised 130 patients (10%). Following independent review by the critical care fellows and arbitration of disagreements, diagnostic error was identified in 10% of patients and diagnostic delay in 17%. When assessing diagnostic error or delay together, the kappa coefficient was 0.57 (95% CI, 0.40-0.74) and observed agreement between reviewers was 87% (95% CI, 80-91%). For diagnostic error, the kappa coefficient was 0.48 (95% CI, 0.22-0.74) and observed agreement was 91.5% (95% CI, 84-95%). For diagnostic delay, the kappa coefficient was 0.61 (95% CI, 0.43-0.78) and observed agreement was 88.5% (95% CI, 81-92%).

Conclusions: In this study, we developed a standard operating procedure that identifies, with moderate reliability, diagnostic error and delay in critically ill adult nontrauma patients.

Volume

46

First Page

613
