Can You Believe Your Troponin? Using Laboratory Automation and Programmed Rules to Reduce Non-Repeatable False Positive Troponin Results
Recommended Citation
Cook B, Jones S, Ellacott T. Can You Believe Your Troponin? Using Laboratory Automation and Programmed Rules to Reduce Non-Repeatable False Positive Troponin Results. Clin Chem 2024; 70(Supplement_1):I6.
Document Type
Conference Proceeding
Publication Date
10-1-2024
Publication Title
Clin Chem
Abstract
BACKGROUND: Cardiac troponin is currently the standard biomarker for evaluating patients with suspected acute coronary syndrome. Like all immunoassays, troponin testing is susceptible to non-repeatable positive results. Because this can cause a clinical challenge, we assessed our false positive rate by manipulating the timing and conditions of repeat troponin testing. METHODS: A prospective observational study established the repeatability failure (RF) rate in three scenarios, where a suspicious initial result prompts a repeat test. Protocol 1: We programmed our middleware (Remisol, Beckman Coulter) to route the specimen to online storage after initial testing. After a 120-minute delay, the specimen was automatically retrieved from storage and tested again. Protocol 2: We programmed DxI 800 systems (Beckman Coulter) to aspirate a specimen reserve volume sufficient for two test replicates simultaneously. Protocol 3: We restricted repeat testing to 18-50 ng/L, where the majority of RF occurred. To analyze RF in the 4-18 ng/L interval (limit of reporting and URL, respectively), an absolute bias of 3.5 ng/L was considered an RF. For specimens >18 ng/L, a difference of ±4.0 SD between replicates with a CV of 7% was an RF, equating to a relative difference of 28% (Method 1). Method 2 used a relative difference of 35% for RF. Method 3 defined RF as a critical difference of z × √2 × SD_analytical, where z = 3.5 and SD = 1.8 ng/L for results <19 ng/L; SDs for results ≥19 ng/L were calculated from a fixed 7% CV for each result. A z-value of 3.5 represents a probability of 0.0005 (5 in 10,000). RESULTS: Using Protocol 1, the RF rate was significantly higher in delayed repeat testing (1.6%, CI 1.5-1.8, n=23,525) than in immediate repeat testing (0.4%, CI 0.4-0.5, n=14,880; p<0.0001) in two-tailed analysis. Because the first result was more commonly larger than the second (77.7% v. 52.2%, respectively), one-tailed analysis was used and found the RF rate in delayed repeat testing was 0.9% (CI 0.8-1.0) v. 0.4% in immediate repeat testing (CI 0.3-0.4; p<0.0001). A similar percentage of results fell in the 18-50 ng/L range in delayed testing (88.6%) v. immediate repeat testing (86.8%). Protocol 2 (with one-tailed analysis) yielded similar RF rates: 0.8% (CI 0.8-1.0) v. 0.3% (CI 0.3-0.4) for delayed and immediate repeat testing, respectively. Protocol 3 yielded RF rates of 0.6% (CI 0.5-0.7) v. 0.2% (CI 0.2-0.3) for delayed and immediate repeat testing, respectively, with all p-values <0.0001. CONCLUSIONS: By programming auto-repeat hsTnI testing only in the 18-50 ng/L range, we found that >80% of specimens with potential RF results could be retested using either the delayed- or immediate-repeat protocol. This suggests automated protocols could repeat testing of potential RFs without staff intervention or increased container movement on the automation line. Reagent usage from repeat testing could be conserved by limiting repeats to the specimens most likely to create provider confusion. Rules that automate selected retesting of hsTnI in testing systems and/or middleware can enhance result quality without human intervention and its burden on staff, reduce reagent waste, and enhance provider decision making in a critical population.
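The Method 3 criterion above (critical difference = z × √2 × SD_analytical, with z = 3.5, SD = 1.8 ng/L below 19 ng/L, and SD derived from a fixed 7% CV at or above 19 ng/L) can be sketched as a small decision rule. This is a minimal illustration assuming the abstract's stated constants; the function names are hypothetical, not from the authors' middleware, and the choice to evaluate SD from the first replicate is an assumption.

```python
import math

Z = 3.5        # z-value corresponding to a probability of ~0.0005 (5 in 10,000)
CV = 0.07      # fixed 7% CV used for results >= 19 ng/L
SD_LOW = 1.8   # analytical SD in ng/L for results < 19 ng/L

def critical_difference(result_ng_per_l: float) -> float:
    """Critical difference per Method 3: z * sqrt(2) * SD_analytical."""
    sd = SD_LOW if result_ng_per_l < 19 else CV * result_ng_per_l
    return Z * math.sqrt(2) * sd

def is_repeatability_failure(first: float, second: float) -> bool:
    """Flag an RF when the replicates differ by more than the critical
    difference evaluated at the first result (assumed reference point)."""
    return abs(first - second) > critical_difference(first)
```

For example, at a first result of 30 ng/L the critical difference is about 10.4 ng/L (z × √2 × 0.07 × 30), so a repeat of 35 ng/L would pass while a repeat of 50 ng/L would be flagged as an RF.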
Volume
70
Issue
Supplement_1
First Page
I6