Differentiating poor validity from probable impairment on the Medical Symptom Validity Test: a cross-validation study

Document Type

Article

Publication Date

3-2019

Publication Title

International Journal of Neuroscience

Abstract

AIMS: In neuropsychological evaluations, it is often difficult to ascertain whether poor performance on validity measures reflects poor effort or malingering, or whether there is genuine cognitive impairment. Dunham and Denney created an algorithm to address this question using the Medical Symptom Validity Test (MSVT). We assessed the ability of their algorithm to distinguish poor validity from probable impairment, and the concordance of MSVT failure with other freestanding performance validity tests.

METHODS: Two previously published datasets (n = 153 and n = 641) from outpatient neuropsychological evaluations were used to test Dunham and Denney's algorithm and to assess concordance of failure rates with the Test of Memory Malingering and the Forced Choice measure of the California Verbal Learning Test, two commonly used performance validity tests.

RESULTS: In both datasets, none of the four cutoff scores for failure on the MSVT (70%, 75%, 80%, or 85%) identified a poor validity group with proportionally aligned failure rates on other freestanding measures of performance validity. Additionally, the protocols with probable impairment did not differ from those with poor validity on cognitive measures.

CONCLUSIONS: Despite what appeared to be a promising approach to evaluating failure on the easy MSVT subtests when clinical data are unavailable, as recommended in the Advanced Interpretation (AI) program of the MSVT, the current findings indicate the AI remains the gold standard for doing so. Future research should build on this effort to address shortcomings in measures of effort in neuropsychological evaluations.

PubMed ID

30234402

Volume

129

Issue

3

First Page

217

Last Page

224
