The recently published paper ‘Abnormal pain perception is associated with thalamo-cortico-striatal atrophy in C9orf72 expansion carriers in the GENFI cohort’ by Convery et al.[1] draws attention to a topic of great importance in the field of frontotemporal dementia (FTD) research. In this study, Convery and colleagues investigated differences in pain responsiveness within a group of patients with genetic FTD. Changes in pain responsiveness compared to baseline were captured using a scale designed by the group, and patients were scored from 0-3 (0 = no change, 0.5 = questionable or very mild change, 1 = mild change, 2 = moderate change, 3 = severe change). Within the sample, symptomatic C9orf72 mutation carriers (9/31) experienced greater changes in pain responsiveness than symptomatic MAPT (1/10) and GRN (1/24) mutation carriers or normal controls (1/181). Within the C9orf72 mutation carriers, these changes were associated with thalamo-cortico-striatal atrophy.
This research brings attention to an important but little-investigated clinical feature of FTD. Changes in pain responsiveness, including both increases and decreases, have now been reported in both sporadic and genetic FTD, along with other somatic complaints.[1–4] However, the changes are not widely captured in either clinical or research settings, and the field lacks standardized and objective measurements to do so. The ability to measure changes in pain responsiveness may be a useful clinical marker to differentiate FTD from other neurodegenerative diseases,[4] and, if the C9orf72 results of Convery et al. are replicated, as an indicator of possible genetic underpinnings.
Recent findings raise the possibility that changes in pain responsiveness differ between FTD phenotypes. Increased pain responsiveness has been reported in patients with semantic-variant primary progressive aphasia (svPPA), particularly in those with right-temporal atrophy, which stands in contrast to decreased pain responsiveness observed in behavioral-variant FTD (bvFTD).[2–5] These findings, in conjunction with those of Convery et al., highlight the importance of capturing directionality of change as well as severity. Similarly, analyses of different phenotypes within the FTD spectrum will be critical to broaden the clinical relevance of this research to sporadic FTD, as some phenotypes are rarely genetic (e.g., svPPA). In the Convery et al. paper, the overwhelming majority of symptomatic participants had bvFTD, which is typical for genetic cohorts. However, the extension of this research into sporadic cases raises the exciting question of whether changes in responsiveness to pain can distinguish between FTD phenotypes, which implicate overlapping but distinct neuroanatomical circuits. This question has great theoretical, as well as clinical, importance.
The Convery et al. paper highlights that altered responsiveness to pain was present in symptomatic but not presymptomatic genetic mutation carriers. It thus remains unclear whether altered pain responsiveness is an early feature of the disease or develops later. Elucidating this timeline will clarify the clinical utility of this research: whether it is useful for early diagnosis or for distinguishing between phenotypes after the dementia syndrome has developed.
As we continue to expand this line of research, it will be essential to develop both subjective and objective measurements of pain responsiveness and other somatic changes in patients with FTD. Refining our understanding of these changes has the potential to be useful in clinical and research settings alike.
1 Convery RS, Bocchetta M, Greaves CV, et al. Abnormal pain perception is associated with thalamo-cortico-striatal atrophy in C9orf72 expansion carriers in the GENFI cohort. J Neurol Neurosurg Psychiatry Published Online First: 5 August 2020. doi:10.1136/jnnp-2020-323279
2 Barker MS, Silverman HE, Fremont R, et al. ‘Everything hurts!’ Distress in semantic variant primary progressive aphasia. Cortex 2020;127:396–8. doi:10.1016/j.cortex.2020.03.002
3 Snowden JS, Bathgate D, Varma A, et al. Distinct behavioural profiles in frontotemporal dementia and semantic dementia. J Neurol Neurosurg Psychiatry 2001;70:323–32. doi:10.1136/jnnp.70.3.323
4 Fletcher PD, Downey LE, Golden HL, et al. Pain and temperature processing in dementia: a clinical and neuroanatomical analysis. Brain 2015;138:3360–72. doi:10.1093/brain/awv276
5 Ulugut Erkoyun H, Groot C, Heilbron R, et al. A clinical-radiological framework of the right temporal variant of frontotemporal dementia. Brain 2020;143:2831–43. doi:10.1093/brain/awaa225
We thank White and colleagues for their correspondence on our article(1) and note that many of the observations raised are already addressed by our robust study design and discussed in the original manuscript text. Importantly, we are quite clear throughout that this is a study designed to investigate whether there is a higher risk of common mental health disorder in former professional soccer players than anticipated from general population controls.
Undoubtedly, there will be physically active individuals in our general population control group, including a number who might have participated in some form of contact sport. However, we would suggest this does not define our over 23,000 matched general population controls as a cohort of ‘non-elite’ athletes, as proposed by White et al. Instead, we would assert this merely underlines their legitimacy as a general population control cohort for comparison with our cohort of almost 8000 former professional soccer players.
Potential study limitations regarding healthy worker effect, illness behavior in former professional soccer players and use of hospitalization datasets are addressed in detail in our manuscript text. Regarding data on duration of hospital stay and therapy, while these might indeed be of interest in follow-on studies regarding illness severity, we would suggest that they are not immediately relevant to a study designed to address risk of common mental health disorder.
As White et al observe, our data reporting lower risk of hospitalization for common mental health disorder in former professional soccer players might appear ‘surprising’; however, this is perhaps a reflection of methodological limitations and biases in previous reporting on this issue, as discussed in our text. As such, as a robust study specifically designed to address many previous limitations and minimize biases, we would disagree with White et al’s suggestion that our ‘surprising’ observations are ‘not necessarily a significant contribution’ to this field.
1 Russell ER, McCabe T, Mackay DF, et al. Mental health and suicide in former professional soccer players. J Neurol Neurosurg Psychiatry Published Online First: 21 July 2020. doi:10.1136/jnnp-2020-323315
Russell et al. (1) published a retrospective cohort study with a population of former professional soccer players with known high neurodegenerative mortality. Findings showed that they are at lower risk of common mental health disorders and have lower rates of suicide than a matched general population. These findings are surprising and different from previous studies, which have used first-hand clinical accounts of ex-athletes who have lived with neurodegeneration (1). We suggest there may be reasons for this disparity and welcome critical dialogue with the authors of this research.
Cohort Comparison
Russell et al. have compared their soccer cohort with a matched population cohort. However, the matched cohort may also include those who have experienced repetitive head impacts, such as amateur soccer players, rugby players or boxers. Therefore, the study may capture differences between elite and non-elite athletes rather than between sporting and non-sporting populations. While Russell et al. recognise the healthy worker effect (2), it may have a greater influence in this study than presented.
Soccer Stoicism
Men’s engagement in health-seeking behaviours has been a long-standing concern in health care and is often attributed to factors such as stigma, hypermasculinity and stoicism (3). Furthermore, working-class sports, such as soccer, require the acceptance of pain, suffering and physical risk, so these players are more likely to ‘suffer in silence’ than the general population (4). Given the effectiveness of masculine socialisation through sport participation, the absence of elevated rates of medical reporting among male athletes relative to non-athletes does not indicate an actual absence of a larger disease profile. The lack of ex-elite athletes engaging with mental health support at a hospital may instead be indicative of health-avoidance behaviours.
Mental Health Concerns Defined by Hospital Admission
Using hospital admission records as the primary definition for mental health concerns is problematic. Hospital admission is reserved for the most severe acute psychiatric concerns. Therefore, such records miss many mental health concerns that are better managed in primary and community care settings. This may be particularly pertinent for this study, given that many of the sample have diagnosed dementia, and may be fully supported with their neuro-psychiatric needs by health care professionals outside the hospital context.
The Bigger Picture
It appears that only a selective subset of the data, or part of the picture, has been reported. No information has been provided on length of hospital stay, the extent and nature of medical interventions, public health burden, the number or frequency of visits by an individual, or any further care. This information may illuminate other explanations for the difference in common mental health disorders between the soccer and matched control samples.
Concluding thoughts
Russell et al assert that research “… has placed greater emphasis on psychiatric symptomatology in CTE. Nevertheless, data supporting this association are weak”. This study does little to support or contest this position. While the results presented are novel, they are not necessarily a significant contribution to the debate on the relationship between soccer participation, common mental health disorders and other neurological outcomes.
References
(1) Russell ER, McCabe T, Mackay DF, et al. Mental health and suicide in former professional soccer players. J Neurol Neurosurg Psychiatry. Published Online First: 21 July 2020. doi:10.1136/jnnp-2020-323315
(2) Li CY, Sung FC. A review of the healthy worker effect in occupational epidemiology. Occup Med 1999;49:225–9.
(3) Wang Y, Hunt K, Nazareth I, Freemantle N, Petersen I. Do men consult less than women? An analysis of routinely collected UK general practice data. BMJ Open. 2013;3(8):e003320
(4) Anderson E, White A. Sport, theory and social problems: a critical introduction. Routledge 2017.
Jacobs et al. investigated the association of environmental factors and prodromal features with incident Parkinson's disease (PD) with special reference to the interaction of genetic factors [1]. The authors constructed polygenic risk scores (PRSs) for the risk assessment. Family history of PD, family history of dementia, non-smoking, low alcohol consumption, depression, daytime somnolence, epilepsy and earlier menarche were selected as PD risk factors. The adjusted odds ratio (OR) (95% confidence interval [CI]) of the highest 10% of PRSs for the risk of PD was 3.37 (2.41 to 4.70). I have some concerns about their study.
Regarding risk/protective factors of PD, Daniele et al. conducted a case-control study to perform a simultaneous evaluation of potential factors of PD [2]. Among 31 environmental and lifestyle factors, 9 factors were extracted by multivariate analysis. The adjusted ORs (95% CIs) of coffee consumption, smoking, physical activity, family history of PD, dyspepsia, exposure to pesticides, metals, and general anesthesia were 0.6 (0.4-0.9), 0.7 (0.6-0.9), 0.8 (0.7-0.9), 3.2 (2.2-4.8), 1.8 (1.3-2.4), 2.3 (1.3-4.2), 5.6 (2.3-13.7), 2.8 (1.5-5.4), and 6.1 (2.9-12.7), respectively. Family history of PD and non-smoking were common risk factors, which had also been reported by several prospective studies.
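Odds ratios of this kind can be illustrated from a 2×2 table; the sketch below computes a crude (unadjusted) OR with a 95% Wald confidence interval from hypothetical counts, not from any of the studies' actual data (the published values above are multivariate-adjusted and cannot be reproduced this way).

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% Wald CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(40, 60, 25, 75)  # OR = 2.0
```

An adjusted OR, by contrast, comes from a multivariable logistic regression that conditions on the other factors simultaneously.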
Regarding smoking, Angelopoulou et al. investigated the association between environmental factors and PD subtypes (early-onset, mid-and-late onset, familial and sporadic) [3]. The adjusted ORs (95% CIs) of smoking for PD overall, mid-and-late onset PD, familial PD, and sporadic PD were 0.48 (0.35-0.67), 0.46 (0.32-0.66), 0.53 (0.34-0.83) and 0.46 (0.32-0.65), respectively. In addition, there was an inverse linear association of PD with pack-years of smoking, except for early-onset PD. Additionally, the adjusted ORs (95% CIs) of coffee consumption for PD overall, early-onset PD, and familial PD were 0.52 (0.29-0.91), 0.16 (0.05-0.53) and 0.36 (0.17-0.75), respectively. Although the mechanism of the association may be difficult to confirm, smokers tend to show a higher prevalence of coffee consumption.
Finally, Li et al. evaluated whether the genetic profile might modify PD development and cerebrospinal fluid (CSF) pathological biomarkers by using single nucleotide polymorphisms (SNPs) and PRSs [4]. Some SNPs had significant correlations with PD, and PRSs could predict PD risk and the age at onset. In contrast, the CSF α-synuclein level had no significant correlation with the PRSs in normal subjects. In any case, further studies are needed to verify PD determinants and gene-environment interactions.
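Conceptually, a PRS of the kind used in these studies is a weighted sum of an individual's risk-allele counts; the sketch below assumes a simple additive model, with made-up SNP dosages and effect sizes purely for illustration.

```python
def polygenic_risk_score(dosages, weights):
    """PRS as a weighted sum of risk-allele dosages (0, 1 or 2 copies
    per SNP) with per-SNP effect sizes (e.g. GWAS log-ORs) as weights."""
    return sum(d * w for d, w in zip(dosages, weights))

# Hypothetical data: four SNPs, for illustration only
dosages = [2, 1, 0, 1]              # copies of the risk allele carried
weights = [0.10, 0.25, 0.05, 0.15]  # per-allele effect size (log-OR)
score = polygenic_risk_score(dosages, weights)  # 0.6
```

Individuals are then ranked on this score; the "highest 10% of PRSs" group compared by Jacobs et al. corresponds to the top decile of such a ranking.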
References
1. Jacobs BM, Belete D, Bestwick J, et al. Parkinson's disease determinants, prediction and gene-environment interactions in the UK Biobank. J Neurol Neurosurg Psychiatry. 2020 Oct;91(10):1046-1054.
2. Daniele B, Roberta P, Andrea F, et al. Risk factors of Parkinson's disease: Simultaneous assessment, interactions and etiological subtypes. Neurology. 2020 Sep 17 doi: 10.1212/WNL.0000000000010813
3. Angelopoulou E, Bozi M, Simitsi AM, et al. The relationship between environmental factors and different Parkinson's disease subtypes in Greece: Data analysis of the Hellenic Biobank of Parkinson's disease. Parkinsonism Relat Disord. 2019 Oct;67:105-112.
4. Li WW, Fan DY, Shen YY, et al. Association of the polygenic risk score with the incidence risk of Parkinson's disease and cerebrospinal fluid α-Synuclein in a Chinese cohort. Neurotox Res. 2019 Oct;36(3):515-522.
Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has become one of the most severe pandemics the world has ever seen. Based on data from Johns Hopkins University, around 26.3 million cases have been detected and around 0.9 million patients have died of COVID-19 globally as of September 04, 2020. The neurological sequelae of COVID-19 include a para/post-infectious, immune or antibody-mediated phenomenon, which classically manifests as Guillain-Barré syndrome (GBS).[1, 2]
We read the systematic review by Uncini et al with great interest. In this instant systematic review, the authors reported 42 patients with GBS associated with COVID-19 from 33 retrieved articles, all of which had been reported from 13 developed countries.[3] The authors noted that the chronology of publication of case reports/series, starting in China and followed by Iran, France, Italy, Spain and the USA, seemed to track the spread of SARS-CoV-2 infection. However, the authors did not discuss why such cases/series had remained under-reported from developing countries. A comprehensive, advanced search of PubMed using the terms ‘SARS-CoV-2’ OR ‘COVID-19’ AND ‘Guillain-Barré syndrome’ on September 04, 2020, led to retrieval of two additional articles from developing countries, one each from Brazil and Morocco.[4, 5] As of September 04, 2020, Brazil and India had the 2nd and 3rd highest number of COVID-19 cases (Johns Hopkins data), yet only one case of COVID-19-associated GBS had been reported from Brazil and no case had been reported from India. An Italian study reported a 5.4-fold increase in the incidence of GBS during this pandemic.[2] In contrast, the number of GBS cases in Bangladesh, a developing country, decreased during the pandemic (personal communication with country coordinator (Bangladesh) of International GBS Outcome Study (IGOS), August 2020), even though Bangladesh reported the highest number of GBS cases worldwide in the IGOS.
The lack of or inadequate testing facilities and structural barriers to getting tested for COVID-19, i.e., excessive waiting time or lack of one-stop services, may contribute to under-reporting of COVID-19-associated GBS in developing countries. We assume that some patients with GBS in developing countries did not seek hospital care due to the lockdown, lack of public transport services, social stigma and fear of nosocomial infection. However, it requires further exploration whether there truly were no COVID-19-associated GBS cases in developing countries or whether these cases remained under-reported.
Patients with GBS associated with COVID-19 may not present with typical symptoms. For instance, the first reported case of GBS associated with SARS-CoV-2 infection was para-infectious, rather than the classical post-infectious presentation.[1] Some cases of GBS may also be negative for SARS-CoV-2 in reverse transcriptase polymerase chain reaction (RT-PCR) tests, as reported by an Italian study.[2] Moreover, if a patient develops GBS long after the acute infection subsides, RT-PCR testing for SARS-CoV-2 may also be negative. Analysis of serum IgG and IgM for SARS-CoV-2 may help to confirm or exclude antecedent SARS-CoV-2 infection, though such tests may not be available in developing countries, which may also result in under-reporting of COVID-19-associated GBS.
The high number of reported cases of COVID-19-associated GBS worldwide provides evidence of a possible association between GBS and COVID-19. Therefore, during this pandemic, clinicians and neurologists should be aware that patients presenting with GBS, even in the absence of cough, fever, respiratory distress or any systemic symptoms, may represent the first manifestation of COVID-19. During this global pandemic, the differential diagnosis of COVID-19-associated GBS should be considered for all cases of GBS unless confirmed otherwise. We call on clinicians, especially neurologists in developing countries, to strengthen surveillance systems for the identification, reporting and better management of COVID-19-associated GBS.
References
1. Zhao H, Shen D, Zhou H, et al. Guillain-Barré syndrome associated with SARS-CoV-2 infection: causality or coincidence? Lancet Neurol 2020;19(5):383-84. doi: 10.1016/s1474-4422(20)30109-5
2. Gigli GL, Bax F, Marini A, et al. Guillain-Barré syndrome in the COVID-19 era: just an occasional cluster? J Neurol 2020:1-3. doi: 10.1007/s00415-020-09911-3
3. Uncini A, Vallat J-M, Jacobs BC. Guillain-Barré syndrome in SARS-CoV-2 infection: an instant systematic review of the first six months of pandemic. Journal of Neurology, Neurosurgery & Psychiatry 2020:jnnp-2020-324491. doi: 10.1136/jnnp-2020-324491
4. El Otmani H, El Moutawakil B, Rafai MA, et al. Covid-19 and Guillain-Barré syndrome: More than a coincidence! Revue neurologique 2020;176(6):518-19. doi: 10.1016/j.neurol.2020.04.007
5. Frank CHM, Almeida TVR, Marques EA, et al. Guillain-Barré Syndrome Associated with SARS-CoV-2 Infection in a Pediatric Patient. J Trop Pediatr 2020 doi: 10.1093/tropej/fmaa044
We read Larrabee and colleagues’ e-letter response to our systematic review on Performance Validity Testing (PVT). Whilst we welcome debate, and we recognize that some clinicians will disagree with our conclusions, we were disappointed that they misrepresented our paper in formulating their response:
1. The authors state “Throughout the paper, the authors refer to PVTs as ‘effort tests’, a characterization that is no longer in use in the United States…”. In reality, we used the term “effort test” only twice in our paper: once in the introduction (“(PVTs), also historically called effort tests”) and once in the methods in describing our search terms. By contrast, we use the term PVT on 45 occasions.
2. We are concerned that they then go on to misrepresent the results of our review. We found a wide variation in results in different clinical groups and in different tests. We noted that failure rates for some groups and some tests exceeded 25%. We did not conclude that all failure rates were as high as this, but rather that failing a PVT was not a rare phenomenon but was reasonably common in a range of clinical groups.
We presented results to support our conclusion that the PVT literature is problematic with regards to blinding to diagnosis and potential for selection bias.
We also uphold our speculation that an alternate explanation for failure on forced choice tests at above chance cutoffs may result from attentional deficit related to other symptoms. We were explicit that there are likely to be many reasons that a person might fail a PVT, and invite further research and discussion of the various causes of PVT failure that might help to explain the observed variation in failure rates across a range of diverse clinical conditions.
3. We did not ignore the importance of using multiple PVTs. On the contrary, we explicitly stated “the manner in which we have described PVT failure rates does not necessarily reflect how they are used in practice by skilled neuropsychologists” and that “Guidance documents recommend that multiple performance validity measures should be used, including both free-standing and embedded indicators ...” But that was not the subject of our study, which concerned the performance of individual tests.
4. They allege that we did not mention any of the previous published meta-analyses summarizing data from multiple investigations. However, the Sollman and Berry review they suggest has a quite different focus from our study, and the majority of studies included were of mixed clinical populations (e.g. ‘psychiatric’, or ‘neurological’ or ‘head injury’ without severity specified)(1). It was not relevant to our question.
Larrabee et al. raise three valid points of criticism:
1. That we were inaccurate in our portrayal of the Novitski paper. We have reviewed the Novitski et al. paper and although the % fail rate was accurately transcribed, we agree the study was erroneously included, as the included mTBI patients had already failed the WMT(2). Interestingly, though, 36% of the amnestic MCI sample in this paper (who were not administered the WMT) also failed the RBANS digit span cutoff of <9. Of note, the mTBI group in the Novitski paper was not included in our Figure 2, due to the rather higher-than-usual cutoff score of 9. This isolated error does not alter our overall conclusions.
2. They challenge our statement that there is little consensus amongst experts on the use of PVTs. However, Kemp et al., in further correspondence with us, stated explicitly that the view of these tests as ones of malingering was old-fashioned and largely no longer accepted, whereas Larrabee and colleagues refer to them as just that. The British Psychological Society refer to them as ‘effort’ tests throughout their guidance, whereas Larrabee and colleagues criticize such nomenclature. It seems to us, even from the responses to our paper, that there is a lack of consensus.
3. Finally, while Larrabee et al. assert that our results are "not representative of the research database", we counter with an assurance that these data were just that: a systematic extraction of all available data from the research database on PVTs.
1. Sollman MJ, Berry DTR. Detection of inadequate effort on neuropsychological testing: a meta-analytic update and extension. Arch Clin Neuropsychol 2011; 26: 774-89.
2. Novitski J, Steele S, Karantzoulis S, Randolph C. The Repeatable Battery for the Assessment of Neuropsychological Status Effort scale. Arch Clin Neuropsychol 2012; 27(2): 190-5.
McWhirter et al. (2020) reviewed the published literature on Performance Validity Tests (PVTs), concluding that high false positive (Fp) rates, exceeding 25%, were common in clinical (non-forensic) samples. In their discussion, they stated: "The poor quality of the PVT evidence base examined here, with a lack of blinding to diagnosis and potential for selection bias, is in itself a key finding of the review." They also concluded that the use of cut scores significantly above chance on two-alternative forced-choice tests (e.g., TOMM) raises questions about the utility of the forced-choice paradigm, essentially characterizing these PVTs as "floor effect" procedures. As such, McWhirter et al. then argued that failure at above-chance cutoffs represents a "functional attentional deficit in people with symptoms of any sort," rather than invalid test performance due to intent to fail.
Throughout the paper, the authors refer to PVTs as “effort tests”, a characterization that is no longer in use in the United States, in part because PVTs require little effort to perform for persons experiencing significant cognitive impairment (1). Rather, PVTs have been defined as representing invalid performance that is not an accurate representation of actual ability. Continuing to refer to PVTs as “effort tests” allows McWhirter et al. to more easily mischaracterize the tests as sensitive attentional tasks affected by variable “effort” rather than measures of performance validity that are failed due to invalid test performance.
As clinicians and investigators in PVT research, we found errors in their analysis and insufficient discussion of the research on mitigating factors related to Fp errors. First, the test error rates reported by McWhirter et al. are not accurate. For example, the Novitski et al. paper (McWhirter et al. reference 26) was cited as showing a 52% Fp rate on RBANS Digit Span < 9 in mild traumatic brain injury (mTBI). However, Novitski et al. used this mTBI sample, who also failed the WMT, as the criterion group representing non-credible (invalid) performance. Consequently, the 52% failure rate represents Sensitivity (to invalid performance) rather than the Fp rate. Importantly, McWhirter et al. do not mention any of the published meta-analyses summarizing data from multiple investigations. For example, Sollman and Berry (2) reported a Specificity of .90, corresponding to a 10% Fp rate, based on 47 samples comparing 5 PVTs administered to 1,787 participants.
Larrabee (3), studying moderate and severe TBI groups, reported Fp rates ranging from .065 to .138 for 4 PVTs and 1 SVT in this TBI sample. Moreover, Larrabee found no differences in mean performance on these 5 validity measures for a group of primarily mTBI cases performing significantly < chance on a two alternative forced choice test, the PDRT, vs a similar group failing the PDRT at an above-chance cutoff, plus failing one additional PVT. The same was true for mean performance on sensitive measures of word-finding, processing speed, verbal and visual learning and memory. These data support the equivalence of definite invalid performance (significantly < chance) and probable invalid performance (defined by ≥ 2 PVT failures), contradicting the description by McWhirter et al. of PVT failure at above chance levels as representing a functional attentional problem. In other words, multiple PVT failure suggests intentional underperformance, provided there is no evidence of pronounced neurologic, psychiatric or developmental factors that could account for such failure, such as Alzheimer-type Dementia (AD), schizophrenia, or Intellectual Deficit. It is the combined improbability of multiple PVT and SVT results, in the context of an external incentive, without any viable alternative explanation, that establishes the intent (to fail) of the examinee (1).
McWhirter et al. ignored the importance of using multiple PVTs to improve diagnostic accuracy. Data from Loring et al. (see McWhirter, reference 8) showed a dramatic reduction in Fp rates when PVTs are used in combination. For example, a Reliable Digit Span (RDS) score of ≤ 6 had an Fp rate of 13% in AD, and a Rey AVLT Recognition score of ≤ 9 had an Fp rate of 70% in AD. Yet, by requiring both an RDS of ≤ 6 AND a Rey AVLT Recognition score of ≤ 9, the Fp rate was dramatically lowered to 5% for AD and 1% for amnestic MCI. Additionally, Loring et al. provided Fp rates (on RDS and AVLT Recognition) by levels of performance on Rey AVLT delayed free recall and Trail Making B. These data showed levels of memory and processing speed sufficient to result in low Fp rates, consistent with a preserved native ability to perform validly on RDS and AVLT Recognition. These results support the widely held practice of employing multiple PVTs in the individual case in order to control Fp error and enhance detection of invalid test performance (also see 4). As the rate of PVT failure increases, the likelihood of Fp identification decreases. Importantly, Bianchini et al. (5) showed that the rate of failure on 5 PVTs correlated with the degree of external incentive, demonstrating a dose effect corroborating a causal relationship between PVT failure and potential compensation.
Curiously, McWhirter et al. contend there is little consensus amongst experts as to how PVTs are used in the same paragraph in which they reference the American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias and malingering (see McWhirter reference 58). This document represents an evidence-based assessment of performance and symptom validity that is currently under revision for publication. The revision supports the conclusions of the original consensus conference statement, with further information regarding the use of multiple PVTs and SVTs. These papers show there is substantial support for validated PVTs and SVTs, with low per-test Fp error rates of 10% or less, and enhanced diagnostic accuracy gained through the use of multiple validity measures.
In closing, the Fp rates reported by McWhirter et al. are not representative of the research database that characterizes modern PVT and SVT investigations. We agree with similar observations by our United Kingdom colleagues (Kemp et al.).
Glenn J. Larrabee, Ph.D. (USA)
Kyle B. Boone, Ph.D. (USA)
Kevin J. Bianchini, Ph.D. (USA)
Martin L. Rohling, Ph.D. (USA)
Elisabeth M. S. Sherman, Ph.D. (Canada)
References
1. Larrabee GJ. Performance validity and symptom validity in neuropsychological assessment. J Int Neuropsychol Soc 2012; 18:625-30.
2. Sollman MJ, Berry DTR. Detection of inadequate effort on neuropsychological testing: a meta-analytic update and extension. Arch Clin Neuropsychol 2011; 26: 774-89.
3. Larrabee GJ. Detection of malingering using atypical performance patterns on standard neuropsychological tests. Clin Neuropsychol 2003; 17: 410-25.
4. Larrabee GJ. False-positive rates associated with the use of multiple performance and symptom validity tests. Arch Clin Neuropsychol 2014; 29: 364-73.
5. Bianchini KJ, Curtis KL, Greve KW. Compensation and malingering in traumatic brain injury: A dose response relationship? Clin Neuropsychol 2006; 20: 831-847.
We read with interest the commentary from Prof Gupta (1). Migraine is a complex and heterogeneous disorder with a multifactorial pathogenesis, in which both genetic and environmental factors are well known to be involved (2). Conversely, hemiplegic migraine (HM) is a complex monogenic disorder related to mutations in genes encoding ion transporters (3). Although many consider HM a subtype of migraine, this condition offers insight into migraine pathophysiology, especially in the case of migraine with aura, as well as into other conditions overlapping between headache and epilepsy, such as the so-called "Ictal Epileptic Headache", a concept defined in the last decade (4-6).
Our knowledge of the pathophysiology of both migraine and HM is evolving, with new insights emerging in recent years (3). However, we only partially agree that "No systemic influence can explain the characteristic lateralizing headache of migraine, unilateral, bilateral, side-shifting or side-locked" (7,8). Interestingly, new data have come from neurophysiology: hyperexcitability/dysexcitability (5) has been clearly demonstrated in migraine sufferers, with more prominent results in migraine with aura (5,9,10). These data suggest a plausible link between the hyperexcitability/dysexcitability of multisensory cortices, cortical spreading depression (CSD) and the "headache" phase of migraine, mediated by the trigeminovascular system and CGRP (11). Moreover, data from mouse models of HM confirm the role of CSD in the pathophysiology of migraine (3). CACNA1A mutations result in gain-of-function effects, with increased Ca2+ influx and enhanced glutamate release at cortical synapses, causing an altered excitatory-inhibitory balance and increased susceptibility to CSD (12,13). Similarly, studies in mouse models have shown that mutations in the ATP1A2 (14,15) and SCN1A (16) genes can lead to an increased propensity to CSD.
Finally, we have a last consideration regarding the therapy of HM. In our review we summarized the literature on diagnostic and therapeutic aspects of HM to offer the reader updated data on the management of this rare disease. However, for many of the drugs described the evidence is poor, and the current therapeutic recommendations are based on isolated reports (3).
Authors
Vincenzo Di Stefano1, Marianna Gabriella Rispoli2, Noemi Pellegrino3, Alessandro Graziosi3, Eleonora Rotondo3, Christian Napoli4, Daniela Pietrobon5, Filippo Brighina1 & Pasquale Parisi6*
1 Department of Biomedicine, Neuroscience and advanced Diagnostic, University of Palermo, Palermo, Italy
2 Department of Neuroscience, Imaging and Clinical Science, "G. d'Annunzio" University, Chieti, Italy
3 Department of Paediatrics, "G. d'Annunzio" University, Chieti, Italy
4 Department of Medical Surgical Sciences and Translational Medicine, Faculty of Medicine & Psychology, "Sapienza" University, c/o Sant'Andrea Hospital, Rome, Italy
5 Department of Biomedical Sciences and Padova Neuroscience Center, University of Padova, Padova, Italy
6 Child Neurology, NESMOS Department, Faculty of Medicine & Psychology, "Sapienza" University, c/o Sant'Andrea Hospital, Rome, Italy
*Correspondence: Prof. Pasquale Parisi MD, PhD
Child Neurology, Pediatric Headache, Chair of Pediatrics, NESMOS Department, Faculty of Medicine & Psychology, Sapienza University, Via Di Grottarossa, 1035–1039, 00189, Rome, Italy
e-mail: pasquale.parisi@uniroma1.it;
1. Gupta VK. Hemiplegic migraine, genetic mutations, and cortical spreading depression: a presumed nexus that defies scientific logic. JNNP. 2020;DOI:10.131.
2. Dodick DW. A Phase-by-Phase Review of Migraine Pathophysiology. Headache. 2018;58.
3. Di Stefano V, Rispoli MG, Pellegrino N, Graziosi A, Rotondo E, Napoli C, et al. Diagnostic and therapeutic aspects of hemiplegic migraine. J Neurol Neurosurg Psychiatry [Internet]. 2020 May 19;jnnp-2020-322850. Available from: http://jnnp.bmj.com/lookup/doi/10.1136/jnnp-2020-322850
4. Parisi P, Striano P, Trenité DGKN, Verrotti A, Martelletti P, Villa MP, et al. “Ictal epileptic headache”: Recent concepts for new classifications criteria. Vol. 32, Cephalalgia. 2012. p. 723–4.
5. Parisi P, Striano P, Negro A, Martelletti P, Belcastro V. Ictal epileptic headache: An old story with courses and appeals. Vol. 13, Journal of Headache and Pain. 2012. p. 607–13.
6. Piccioli M, Parisi P, Tisei P, Villa MP, Buttinelli C, Kasteleijn-Nolst Trenité DGA. Ictal headache and visual sensitivity. Cephalalgia. 2009 Feb;29(2):194–203.
7. Gupta VK. Cortical-spreading depression: At the razor’s edge of scientific logic. Vol. 12, Journal of Headache and Pain. 2011. p. 45–6.
8. Gupta VK. Nitric oxide and migraine: another systemic influence postulated to explain a lateralizing disorder. Eur J Neurol. 1996 Mar;3(2):172–3.
9. Brighina F, Bolognini N, Cosentino G, MacCora S, Paladino P, Baschi R, et al. Visual cortex hyperexcitability in migraine in response to sound-induced flash illusions. Neurology. 2015 May 19;84(20):2057–61.
10. Brighina F, Palermo A, Daniele O, Aloisio A, Fierro B. High-frequency transcranial magnetic stimulation on motor cortex of patients affected by migraine with aura: A way to restore normal cortical excitability? Cephalalgia. 2010 Jan;30(1):46–52.
11. Edvinsson L. The Trigeminovascular Pathway: Role of CGRP and CGRP Receptors in Migraine. Headache. 2017;57.
12. Tottene A, Conti R, Fabbro A, Vecchia D, Shapovalova M, Santello M, et al. Enhanced Excitatory Transmission at Cortical Synapses as the Basis for Facilitated Spreading Depression in CaV2.1 Knockin Migraine Mice. Neuron. 2009 Mar 12;61(5):762–73.
13. Van Den Maagdenberg AMJM, Pietrobon D, Pizzorusso T, Kaja S, Broos LAM, Cesetti T, et al. A Cacna1a knockin migraine mouse model with increased susceptibility to cortical spreading depression. Neuron. 2004 Mar 4;41(5):701–10.
14. Capuani C, Melone M, Tottene A, Bragina L, Crivellaro G, Santello M, et al. Defective glutamate and K+ clearance by cortical astrocytes in familial hemiplegic migraine type 2. EMBO Mol Med. 2016 Aug;8(8):967-86.
15. Leo L, Gherardini L, Barone V, de Fusco M, Pietrobon D, Pizzorusso T, et al. Increased susceptibility to cortical spreading depression in the mouse model of Familial hemiplegic migraine type 2. PLoS Genet. 2011 Jun;7(6).
16. Jansen NA, Dehghani A, Linssen MML, Breukel C, Tolner EA, van den Maagdenberg AMJM. First FHM3 mouse model shows spontaneous cortical spreading depolarizations. Ann Clin Transl Neurol. 2020 Jan 1;7(1):132–8.
We read with interest Kemp and colleagues' response to our recent systematic review on Performance Validity Testing (PVT). In response to the specific criticisms raised:
1- The searches and data extraction were conducted by one investigator. We agree this is a potential limitation, although only if papers were missed or data were erroneously transcribed, and it can be demonstrated that this would have changed the conclusions. Although Kemp and colleagues place great weight on this, the evidence they put forward to support their contention was limited. Of the four citations in their letter, reference 2 and reference 4 were in fact included (see our supplementary tables and our reference 57)(1,2). Reference 3 was, by coincidence, published simultaneously with our manuscript submission and was not available to us(3).
Reference 1 did not fit the terms of our search strategy and was not included, although it would have been eligible(4). It was an unblinded study of the 'coin in hand test', a brief forced-choice screening test for symptom exaggeration, administered to 45 patients with mixed dementias. It found that 11% scored at or above a two-error cut-off, and the authors proposed a new set of cut-offs for interpretation; it was in keeping with our conclusions. We would be happy to consider any other specific omissions or quality assessment issues not discussed which the authors consider would have altered the conclusions of the review.
2- The authors criticise our understanding of how PVTs are used in clinical practice, stating that PVTs should not be interpreted on a stand-alone basis but in combination as part of a wider assessment. We agree, and made that point explicitly in the paper. Nonetheless, an understanding of the accuracy of individual tests remains of key importance to the weight accorded to individual tests in that wider assessment. They state that the way we presented single-test failure rates is not the way the tests should be used. Again, we agree and pointed this out in the paper. However, they also misrepresent us, as we did not score or interpret the tests ourselves, nor did we conflate different forms of testing - we reported all the available data as the authors presented them.
3- The third point they make is that we are not saying anything new. We agree, inasmuch as we have methodically documented and grouped in one paper data that were in the public domain. In particular, the authors suggest that 'base rate failure is well understood', but in our experience that is not the case; indeed, the British Psychological Society's own guidelines state "Further evidence on UK base rates of cognitive impairment and failure on effort tests in a range of clinical presentations and service settings is needed"(2). There are no other papers that synthesise these data in clinical populations to give readers an overview of these base rates. More importantly, the evidence we found showed a wide variety of use and interpretation of PVTs. Kemp and colleagues go on to describe how studies should ideally be done to compare clinical populations. We agree, and discuss this in our closing remarks. The problem is that the evidence to date falls far short of this ideal.
As we made clear in the paper, we agree that PVTs may be useful in the correct context and with an understanding of their limitations. We were not “dismissive” of the developing PVT literature and we believe this should be a literature open to scrutiny by all.
1. Sieck BC, Smith MM, Duff K, Paulsen JS, Beglinger LJ. Symptom validity test performance in the Huntington Disease Clinic. Arch Clin Neuropsychol. 2013;28(2):135-43.
2. British Psychological Society. Assessment of Effort in Clinical Testing of Cognitive Functioning for Adults. 2009.
3. Sherman EMS, Slick DJ, Iverson GL. Multidimensional Malingering Criteria for Neuropsychological Assessment: A 20-Year Update of the Malingered Neuropsychological Dysfunction Criteria. Arch Clin Neuropsychol. 2020;00:1–30.
4. Schroeder RW, Peck CP, Buddin WH, Heinrichs RJ, Baade LE. The Coin-in-the-Hand Test and Dementia. Cogn Behav Neurol. 2012 Sep;25(3):139–43.
In their article, Performance validity test failure in clinical populations - a systematic review, McWhirter and colleagues (2020) present the ‘base rates’ of performance validity test (PVT) failure (or what are commonly referred to as effort tests) and offer an analysis of PVT performance from their perspective as neurologists and neuropsychiatrists.
As a group of senior practising clinical neuropsychologists, we are pleased that they have drawn attention to an important issue, but we have significant concerns about the methodology used and with several of the conclusions drawn within the review. We present this response from the perspective of U.K. neuropsychology practice, and as practitioners involved in research and in formulating clinical guidance on the use of PVTs. In preparing this response, we were aware of the parallel concerns of our U.S. counterparts (Larrabee et al.), but we have submitted separate responses due to the word limit.
The systematic review methodology used by McWhirter et al. has resulted in a limited number of papers being included, and there is no indication of the quality of the studies included. All of the literature search and analytic procedures appear to have been undertaken by one person alone, hence there was no apparent control for human error, bias, omission or inaccurate data extraction. Also, it is unclear to us to what extent McWhirter and colleagues had the knowledge to determine what data constituted PVT failure, since no neuropsychologists appear to have been involved in their paper.
Whilst we welcome their scrutiny of PVT performance across a range of clinical settings, and their drawing attention to the important matter of base rates of failure in the absence of any obvious incentive to underperform at neuropsychological examination, this point is well understood in the existing literature and not in itself a novel finding. Most neuropsychologists will be familiar with such failures in their clinical practice, and these findings arise in a number of publications, including ones which McWhirter et al. omitted to review1 2.
McWhirter et al.'s key conclusion is that PVT 'failure rates are no higher in functional disorders than in other clinical conditions'. They then infer from this conclusion that it 'raises important questions about the degree of objectivity afforded to neuropsychological tests in clinical practice and research', but they do not expand on this generalisation. In reaching their key conclusion, McWhirter et al. fall into the trap of 'comparing apples with oranges' and do not make reliable and valid comparisons. If we take one of the best documented functional conditions, Psychogenic Non-Epileptic Seizures (PNES), a proper comparison would be to take a group of well-documented PNES patients, who had only psychogenic seizures with no discernible lesion pathology, and compare them to a group of patients who had well-documented organic seizures, with clearly defined lesion pathology. As well as matching on the usual demographic variables such as age, sex and educational background, the two groups would be carefully matched for duration, frequency and severity of seizures, and for functional disability. It would then be meaningful to compare the performance of the two groups on PVTs, and to come to conclusions as to whether rates are higher, lower or the same in the functional group compared to the organic group.
Whilst we have concerns about the lack of rigour in the search methodology, which resulted in an incomplete literature review that pooled data from studies of uncertain quality that may not be comparable, of more concern is that McWhirter and colleagues may lack the knowledge and expertise to interpret these data in a clinically meaningful way. The authors are dismissive of what is still a developing PVT literature that has achieved a good deal in the last 15-20 years and has resulted in an excellent consensus on the requirement to validate neuropsychological test performance with objective tests and symptom-based questionnaires. McWhirter et al.'s interpretation of the findings does not provide adequate context from either the latest U.S. or the U.K. effort test / PVT interpretation guidelines, and does not reflect neuropsychological expertise or clinical neuropsychological practice. The authors do not cite the latest U.S. guidelines (Sherman et al, 2020)3, and the U.K. guidelines are not mentioned (British Psychological Society: Professional Practice Board)4.
A further key difficulty with the paper is that the authors report the failure rate on individual effort tests of different sensitivities without consideration of the various methodological and statistical techniques that clinical neuropsychologists use to interpret such findings. In clinical practice, a single test score is of little significance, and the authors appear to misunderstand this fundamental point of clinical neuropsychology practice. An effort test profile is obtained by using a combination of PVTs of different sensitivities and different cognitive domains, administered throughout the examination, subjected to statistical discrepancy analysis (often binomial probability analysis), placed in the context of positive and negative predictive power, and set in the context of the patient's wider clinical presentation, which could include pain, fatigue, depression and anxiety and their related effects on concentration. The paper by McWhirter et al. shows no discernible understanding of this statistical and clinical context. Current guidelines clearly identify the need to interpret the results of a failure and provide possible explanations; it has never been a simple case of regarding pass / fail on a single effort test as diagnostic in its own right.
The authors also seem to misunderstand key concepts, including 'profile analysis', a technique to prevent the misclassification of PVT failure as low effort in the presence of bona fide cognitive problems; this methodology is applicable to tests other than the Word Memory Test, including the TOMM. Failure to understand this, and to exclude below cut-off performance on effort tests when a 'severe impairment profile' is obtained, will further distort the McWhirter et al. findings, which are derived from a methodology that appears to fall short of the PRISMA standard and resulted in a partial review of the literature, without mention of quality criteria, with no second rater and with no method to resolve inter-rater discrepancies, as would be expected of a well-conducted systematic review. In their Discussion section, the authors also appear to have confounded forced-choice testing, chance-level performance and intentionality.
In their review, McWhirter et al unfortunately group PVTs together and do not appear to readily distinguish between embedded measures and PVTs which have been specifically designed to detect poor cognitive effort. It is performance on the latter tests which need to be accorded greater significance, as it is those which form the basis of conclusions reached by neuropsychologists in their clinical practice when coming to a diagnosis of questionable effort.
In summary, we welcome the contribution of McWhirter et al. to an important debate. However, their depiction of neuropsychology as using PVTs alone, without clinical context and without methods of analysis to diagnose functional cognitive disorder or 'malingering', presents a 'straw man' argument, because this does not align with what clinical neuropsychologists think or do. A clearer and more extensive review of the literature would have identified the role of PVTs and the complexity of their interpretation, and also allowed readers to have a more balanced understanding of their role in clinical practice.
1. Schroeder R, Peck C, Buddin W, et al. The Coin-in-the-Hand test and dementia: more evidence for a screening test for neurocognitive symptom exaggeration. Cogn Behav Neurol 2012;25:139-143.
2. Sieck B, Smith M, Duff K, et al. Symptom validity test performance in the Huntington Disease clinic. Arch Clin Neuropsychol 2013;28:135-143.
3. Sherman EMS, Slick DJ, Iverson GL. Multidimensional Malingering Criteria for Neuropsychological Assessment: a 20-year update of the malingered neuropsychological dysfunction criteria. Arch Clin Neuropsychol 2020;00:1-30.
4. Assessment of Effort in Clinical Testing of Cognitive Functioning for Adults. British Psychological Society: Professional Practice Board, 2009.
We thank White and colleagues for their correspondence on our article(1) and note that many of the observations raised are already addressed by our robust study design and discussed in the original manuscript text. Importantly, we are quite clear throughout that this is a study designed to investigate whether there is a higher risk of common mental health disorder in former professional soccer players than anticipated from general population controls.
Undoubtedly, there will be physically active individuals in our general population control group, including a number who might have participated in some form of contact sport. However, we would suggest this does not define our over 23,000 matched general population controls as a cohort of ‘non-elite’ athletes, as proposed by White et al. Instead, we would assert this merely underlines their legitimacy as a general population control cohort for comparison with our cohort of almost 8000 former professional soccer players.
Potential study limitations regarding healthy worker effect, illness behavior in former professional soccer players and use of hospitalization datasets are addressed in detail in our manuscript text. Regarding data on duration of hospital stay and therapy, while these might indeed be of interest in follow-on studies regarding illness severity, we would suggest that they are not immediately relevant to a study designed to address risk of common mental health disorder.
As White et al observe, wh...
Russell et al. (1) published a retrospective cohort study of former professional soccer players, a population with known high neurodegenerative mortality. The findings showed that they are at lower risk of common mental health disorders and have lower rates of suicide than a matched general population. These findings are surprising and differ from previous studies, which have used first-hand clinical accounts of ex-athletes who have lived with neurodegeneration (1). We suggest there may be reasons for this disparity and welcome critical dialogue with the authors of this research.
Cohort Comparison
Russell et al. have compared their soccer cohort with a matched population cohort. However, the matched cohort may also include people who have experienced repetitive head impacts, such as amateur soccer players, rugby players or boxers. Therefore, the study represents differences of elite versus non-elite rather than sport versus non-sport. While Russell et al. recognise the healthy worker effect (2), it may have a greater influence in this study than presented.
Soccer Stoicism
Men’s engagement in health-seeking behaviours has been a long-standing concern in health care and is often attributed to factors such as stigma, hypermasculinity and stoicism (3). Furthermore, working-class sports such as soccer require the acceptance of pain, suffering and physical risk, so these players are more likely to ‘suffer in silence’ than the general population (4). Give...
Jacobs et al. investigated the association of environmental factors and prodromal features with incident Parkinson's disease (PD) with special reference to the interaction of genetic factors [1]. The authors constructed polygenic risk scores (PRSs) for the risk assessment. Family history of PD, family history of dementia, non-smoking, low alcohol consumption, depression, daytime somnolence, epilepsy and earlier menarche were selected as PD risk factors. The adjusted odds ratio (OR) (95% confidence interval [CI]) of the highest 10% of PRSs for the risk of PD was 3.37 (2.41 to 4.70). I have some concerns about their study.
Regarding risk/protective factors of PD, Daniele et al. conducted a case-control study to perform a simultaneous evaluation of potential factors of PD [2]. Among 31 environmental and lifestyle factors, 9 were extracted by multivariate analysis. The adjusted ORs (95% CIs) of coffee consumption, smoking, physical activity, family history of PD, dyspepsia, exposure to pesticides, metals, and general anesthesia were 0.6 (0.4-0.9), 0.7 (0.6-0.9), 0.8 (0.7-0.9), 3.2 (2.2-4.8), 1.8 (1.3-2.4), 2.3 (1.3-4.2), 5.6 (2.3-13.7), 2.8 (1.5-5.4), and 6.1 (2.9-12.7), respectively. Family history of PD and non-smoking were common risk factors, which had also been reported by several prospective studies.
Regarding smoking, Angelopoulou et al. investigated the association between environmental factors and PD subtypes (early-onset, mid-and-late on...
Dear sir,
Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has become one of the most severe pandemics the world has ever seen. Based on data from Johns Hopkins University, around 26.3 million cases had been detected and around 0.9 million patients had died of COVID-19 globally as of September 04, 2020. The neurological sequelae of COVID-19 include a para/post-infectious, immune or antibody-mediated phenomenon, which classically manifests as Guillain-Barré syndrome (GBS).[1, 2]
We read the systematic review by Uncini et al with great interest. In this systematic review, the authors reported 42 patients with GBS associated with COVID-19 from 33 retrieved articles, all of which had been reported from 13 developed countries.[3] The authors noted that the chronology of publication of case reports/series, starting from China and followed by Iran, France, Italy, Spain and the USA, seemed to be related to the track of SARS-CoV-2 infection spread. However, the authors did not discuss why such cases/series had remained under-reported from developing countries. A comprehensive, advanced search of PubMed using the terms ‘SARS-CoV-2’ OR ‘COVID-19’ AND ‘Guillain-Barré syndrome’ on September 04, 2020, led to retrieval of two additional articles from developing countries, one each from Brazil and Morocco.[4, 5] As of September 04, 2020, Brazil and India had the 2nd and 3rd highest number of COVI...
We read Larrabee and colleagues’ e-letter response to our systematic review on Performance Validity Testing (PVT). Whilst we welcome debate, and we recognize that some clinicians will disagree with our conclusions, we were disappointed that they misrepresented our paper in formulating their response:
1. The authors state “Throughout the paper, the authors refer to PVTs as “effort tests”, a characterization that is no longer in use in the United States..”. In reality we used the term “effort test” only twice in our paper: once in the introduction, “(PVTs), also historically called effort tests”, and once in the methods in describing our search terms. By contrast we use the term PVT on 45 occasions.
2. We are concerned that they then go on to misrepresent the results of our review. We found wide variation in results across different clinical groups and different tests. We noted that failure rates for some groups and some tests exceed 25%. We did not conclude that all failure rates were this high, but rather that failing a PVT was not a rare phenomenon and was reasonably common in a range of clinical groups.
We presented results to support our conclusion that the PVT literature is problematic with regards to blinding to diagnosis and potential for selection bias.
We also uphold our speculation that an alternative explanation for failure on forced-choice tests at above-chance cutoffs may be attentional deficit related to other symptoms. W...
McWhirter et al. (2020) reviewed the published literature on Performance Validity Tests (PVTs), concluding that high false positive (Fp) rates were common in clinical (non-forensic) samples, exceeding 25%. In their discussion, they stated: “The poor quality of the PVT evidence base examined here, with a lack of blinding to diagnosis and potential for selection bias, is in itself a key finding of the review.” They also conclude that the use of a forced choice format with cut scores that are significantly above chance on two alternative forced choice tests (e.g., TOMM), raises questions about the utility of the forced choice paradigm, essentially characterizing these PVTs as “floor effect” procedures. As such, McWhirter et al. then argued that failure at above chance cutoffs represents “functional attentional deficit in people with symptoms of any sort,” rather than invalid test performance due to intent to fail.
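The distinction at issue here can be made concrete with a binomial calculation. The sketch below (Python; the 50-trial, two-alternative format and the 45/50 cut-off are illustrative assumptions modelled loosely on tests such as the TOMM, not figures taken from any test manual) shows why significantly below-chance scores are treated as evidence of deliberate failure, while the common clinical cut-offs sit far above the chance region:

```python
from math import comb

def p_score_at_most(correct, trials=50, p_guess=0.5):
    """Cumulative binomial probability of getting `correct` or fewer
    items right by guessing alone on a forced-choice test."""
    return sum(
        comb(trials, i) * p_guess**i * (1 - p_guess)**(trials - i)
        for i in range(correct + 1)
    )

# Pure guessing on a 50-item two-alternative test centres on 25/50.
# Scores far below 25 are improbable without deliberate wrong answers,
# whereas an illustrative 45/50 cut-off sits far above chance:
print(p_score_at_most(18))   # well below chance: small probability
print(p_score_at_most(44))   # a guesser almost never reaches 45/50
```

On this arithmetic, below-chance scores and above-chance cut-offs rest on different inferential grounds: the former on the binomial distribution itself, the latter on normative data from genuinely impaired groups.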
Throughout the paper, the authors refer to PVTs as “effort tests”, a characterization that is no longer in use in the United States, in part because PVTs require little effort to perform for persons experiencing significant cognitive impairment (1). Rather, PVTs have been defined as representing invalid performance that is not an accurate representation of actual ability. Continuing to refer to PVTs as “effort tests” allows McWhirter et al. to more easily mischaracterize the tests as sensitive attentional tasks affected by variable “effort” rather than measur...
We read with interest the commentary from Prof Gupta (1). Migraine is a complex and heterogeneous disorder with multifactorial pathogenesis (2). Indeed, it is well established that both genetic and environmental factors are involved in the etiopathogenesis of migraine (2). Conversely, hemiplegic migraine (HM) is a complex monogenic disorder related to mutations in genes encoding ion transporters (3). Even if many consider HM a subtype of migraine, this condition offers insight into migraine pathophysiology, especially in the case of migraine with aura, as well as into other conditions overlapping between headache and epilepsy, such as the so-called “Ictal Epileptic Headache”, a new concept defined in the last decade (4–6).
Our knowledge of the pathophysiology of both migraine and HM is evolving, with new insights emerging in recent years (3). However, we partially agree that ….“No systemic influence can explain the characteristic lateralizing headache of migraine, unilateral, bilateral, side-shifting or side-locked” (7,8). Interestingly, new data have come from neurophysiology: hyperexcitability/dysexcitability (5) has been clearly demonstrated in migraine sufferers, with more prominent results especially in migraine with aura (5,9,10). These data could make a reasonable link between the genesis of hyperexcitability/dysexcitability of multisensory cortices, cortical spreading depression (CSD) and the “headache” phase of migraine, mediated by the tri...
We read with interest Kemp and colleagues response to our recent systematic review on Performance Validity Testing (PVT). In response to specific criticisms raised:
1- The searches and data extraction were conducted by one investigator. We agree this is a potential limitation, although only if papers were missed or data were erroneously transcribed, and if it could be demonstrated that this would have changed the conclusions. Although Kemp and colleagues place great weight on this, the evidence they put forward to support their contention was limited. Of the four citations in their letter, reference 2 and reference 4 were in fact included (see our supplementary tables and our reference 57)(1,2). Reference 3 was, by coincidence, published simultaneously with our manuscript submission and was not available to us(3).
Reference 1 did not fit the terms of our search strategy and was not included, although it would have been eligible(4). It was an unblinded study of the ‘coin in hand’ test, a brief forced-choice screening test for symptom exaggeration, administered to 45 patients with mixed dementias. It found that 11% scored at or above a two-error cut-off, and the authors proposed a new set of cut-offs for interpretation; it was in keeping with our conclusions. We would be happy to consider any other specific omissions or quality assessment issues not discussed which the authors consider would have altered the conclusions of the review.
2- The authors criticise our understanding...
Dear Editor
Response to McWhirter et al (2020):
In their article, Performance validity test failure in clinical populations - a systematic review, McWhirter and colleagues (2020) present the ‘base rates’ of performance validity test (PVT) failure (or what are commonly referred to as effort tests) and offer an analysis of PVT performance from their perspective as neurologists and neuropsychiatrists.
As a group of senior practicing clinical neuropsychologists, we are pleased that they have drawn attention to an important issue, but we have significant concerns about the methodology used and with several of the conclusions drawn within the review. We present this response from the perspective of U.K. neuropsychology practice, and as practitioners involved in research and formulating clinical guidance on the use of PVTs. In preparing this response, we were aware of parallel concerns of our U.S. counterparts (Larrabee et al) but we have submitted separate responses due to the word limit.
The systematic review methodology used by McWhirter et al. has resulted in a limited number of papers being included, and there is no indication of the quality of the studies included. All of the literature search and analytic procedures appear to have been undertaken by one person alone, hence there was no apparent control for human error, bias, omission or inaccurate data extraction. Also, it is unclear to us to what extent McWhirter and colleagues had the knowle...