Responses
Response to Larrabee et al
Published on: 14 September 2020
We read Larrabee and colleagues’ e-letter response to our systematic review on Performance Validity Testing (PVT). Whilst we welcome debate, and recognize that some clinicians will disagree with our conclusions, we were disappointed that they misrepresented our paper in formulating their response:
1. The authors state: “Throughout the paper, the authors refer to PVTs as ‘effort tests’, a characterization that is no longer in use in the United States…”. In reality, we used the term “effort test” only twice in our paper: once in the introduction (“(PVTs), also historically called effort tests”) and once in the methods, in describing our search terms. By contrast, we use the term PVT on 45 occasions.
2. We are concerned that they then go on to misrepresent the results of our review. We found wide variation in results across different clinical groups and different tests. We noted that failure rates for some groups and some tests exceed 25%. We did not conclude that all failure rates were this high, but rather that failing a PVT is not a rare phenomenon and is reasonably common in a range of clinical groups.
We presented results to support our conclusion that the PVT literature is problematic with regard to blinding to diagnosis and potential for selection bias.
We also uphold our speculation that failure on forced choice tests at above-chance cutoffs may alternatively result from attentional deficit related to other symptoms. W...
Conflict of Interest:
None declared.

Letter: Response to McWhirter et al (2020)
Published on: 14 September 2020
McWhirter et al. (2020) reviewed the published literature on Performance Validity Tests (PVTs), concluding that high false positive (Fp) rates, exceeding 25%, were common in clinical (non-forensic) samples. In their discussion, they stated: “The poor quality of the PVT evidence base examined here, with a lack of blinding to diagnosis and potential for selection bias, is in itself a key finding of the review.” They also concluded that the use of cut scores significantly above chance on two-alternative forced choice tests (e.g., TOMM) raises questions about the utility of the forced choice paradigm, essentially characterizing these PVTs as “floor effect” procedures. As such, McWhirter et al. argued that failure at above-chance cutoffs represents “functional attentional deficit in people with symptoms of any sort”, rather than invalid test performance due to intent to fail.
Throughout the paper, the authors refer to PVTs as “effort tests”, a characterization that is no longer in use in the United States, in part because PVTs require little effort to perform, even for persons experiencing significant cognitive impairment (1). Rather, PVT failure has been defined as invalid performance that is not an accurate representation of actual ability. Continuing to refer to PVTs as “effort tests” allows McWhirter et al. to more easily mischaracterize the tests as sensitive attentional tasks affected by variable “effort” rather than measur...
Conflict of Interest:
All contributors provide neuropsychological consultation in medicolegal matters.

Response to Kemp et al.
Published on: 14 August 2020
We read with interest Kemp and colleagues’ response to our recent systematic review on Performance Validity Testing (PVT). In response to the specific criticisms raised:
1. The searches and data extraction were conducted by one investigator. We agree this is a potential limitation, although only if papers were missed or data were erroneously transcribed, and only if it could be demonstrated that this would have changed the conclusions. Although Kemp and colleagues place great weight on this, the evidence they put forward to support their contention is limited. Of the four citations in their letter, reference 2 and reference 4 were in fact included (see our supplementary tables and our reference 57) (1,2). Reference 3 was, by coincidence, published simultaneously with our manuscript submission and was not available to us (3).
Reference 1 did not fit the terms of our search strategy and was not included, although it would have been eligible (4). It was an unblinded study of the ‘coin in hand’ test, a brief forced choice screening test for symptom exaggeration, administered to 45 patients with mixed dementias. It found that 11% scored at or above a two-error cut-off, and the authors proposed a new set of cut-offs for interpretation; its findings were in keeping with our conclusions. We would be happy to consider any other specific omissions or quality assessment issues not discussed which the authors consider would have altered the conclusions of the review.
2. The authors criticise our understanding...
Conflict of Interest:
None declared.

Response to McWhirter et al
Published on: 14 August 2020
Dear Editor,
Response to McWhirter et al (2020):
In their article, ‘Performance validity test failure in clinical populations - a systematic review’, McWhirter and colleagues (2020) present the ‘base rates’ of performance validity test (PVT) failure (or failure on what are commonly referred to as effort tests) and offer an analysis of PVT performance from their perspective as neurologists and neuropsychiatrists.
As a group of senior practicing clinical neuropsychologists, we are pleased that they have drawn attention to an important issue, but we have significant concerns about the methodology used and about several of the conclusions drawn within the review. We present this response from the perspective of U.K. neuropsychology practice, and as practitioners involved in research and in formulating clinical guidance on the use of PVTs. In preparing this response, we were aware of the parallel concerns of our U.S. counterparts (Larrabee et al), but we have submitted separate responses due to the word limit.
The systematic review methodology used by McWhirter et al. has resulted in a limited number of papers being included, and there is no indication of the quality of those studies. All of the literature search and analytic procedures appear to have been undertaken by one person alone; hence, there was no apparent control for human error, bias, omission or inaccurate data extraction. Also, it is unclear to us to what extent McWhirter and colleagues had the knowle...
Conflict of Interest:
None declared.