Recent eLetters

Displaying 1-10 letters out of 505 published

  1. CNS involvement in V30M transthyretin amyloidosis: clinical, neuropathological and biochemical data from another center.

    We read with great interest the recent study by Maia et al, which retrospectively reports central nervous system (CNS) involvement in patients with familial amyloid polyneuropathy associated with the Val30Met mutation (ATTR Val30Met FAP) [1]. This complication was observed in 31% of ATTR Val30Met FAP patients with long survival due to liver transplantation (LT). CNS disorders occurred on average 14.6 years after the onset of TTR-FAP, which is beyond the average life expectancy (11 years) of non-transplanted patients [2]. The CNS symptoms were classified as negative (ischemia and aura) or positive (epileptic seizures). Magnetic resonance imaging (MRI) could not be performed in their patient population; however, amyloid deposition extending from the meninges into the brain parenchyma was proposed as the neuropathological hallmark of these clinical manifestations, based mainly on historical necropsy studies [1]. Indeed, LT does not preclude amyloid production by the retinal pigment epithelium and choroid plexus, so ocular and CNS involvement many years after LT is not an unexpected observation [3]. Moreover, atypical cases of ATTR Val30Met FAP presenting with early CNS manifestations have been described [4].

    In our center, a similar experience with de novo CNS dysfunction in long survivors with ATTR Val30Met FAP has been identified. Of 83 patients with a disease duration longer than 10 years (mean 15.6, SD 5.1), of whom 91.5% (76/83) underwent LT (mean duration after LT 10.8 years), 16.9% (14/83) had some CNS manifestation. The mean interval between disease presentation and CNS manifestation was 16.9 years (SD 5.5). In the group of affected patients, 64% (9/14) had a transient ischemic attack-like episode, 7% (1/14) ischemic stroke, 7% (1/14) brain hematoma, 43% (6/14) epileptic seizures, 7% (1/14) a psychosis-like event and 93% (13/14) cognitive impairment (11 meeting criteria for mild cognitive impairment and 2 with minor cognitive impairment), although no patient met criteria for dementia. Our patients underwent brain CT scan, EEG, neurovascular workup (blood tests, EKG, transcranial Doppler, carotid ultrasound and echocardiogram) and formal neuropsychological evaluation including the Mini Mental State Examination (MMSE) and tests of attention, memory and executive functions. Brain MRI (without gadolinium) was performed in two patients with seizures and excluded ischemic or hemorrhagic lesions. Neuropsychological assessment disclosed attention and initiative deficits as the most salient findings, suggesting dysexecutive (frontal) involvement.

    A 40-year-old male, in whom LT was performed at the age of 26 (three years after disease onset), had an acute right parieto-occipital hematoma two years after his initial symptoms of epileptic seizures. During surgical decompression, a cerebral biopsy was performed, which showed TTR amyloid deposition in meningeal vessels and cerebral parenchyma with no beta A4 amyloid deposition (Figure 1). This observation is in agreement with the neuropathological findings described by Maia et al [1].

    In general, our results support the detailed report by Maia et al [1]. In our population, seizures were somewhat more frequent, and MRI could be performed in two of those patients, excluding vascular changes. Furthermore, neuropsychological testing disclosed mild executive impairment in almost all patients, which raises the hypothesis of predominantly frontal amyloid deposition. Many questions remain open and should be addressed in future studies, in particular the risk of dementia and seizures in this group of patients. This has critical implications for medical decisions regarding LT in TTR-FAP patients.

    REFERENCES

    1. Maia LF, Magalhães R, Freitas J, et al. CNS involvement in V30M transthyretin amyloidosis: clinical, neuropathological and biochemical findings. J Neurol Neurosurg Psychiatry 2015;86:159-67.

    2. Coutinho P, Martins da Silva A, Lopes Lima J, Resende Barbosa A. Forty years of experience with type I amyloid neuropathy. Review of 483 cases. In: Glenner G, Costa P, de Freitas A, eds. Amyloid and Amyloidosis 1980. Amsterdam: Excerpta Medica, pp. 88-98.

    3. Sandgren O, Kjellgren D, Suhr O. Ocular manifestations in liver transplant recipients with familial amyloid polyneuropathy. Acta Ophthalmol 2008;86:520-524.

    4. Herrick MK, DeBruyne K, Horoupian DS, Skare J, Vanefsky MA, Ong T. Massive leptomeningeal amyloidosis associated with a Val30Met transthyretin gene. Neurology 1996;47:988-992.

    Conflict of Interest:

    None declared

  2. Cerebellar ataxia: lessons from research

    I read with interest the review by Tada et al. [1], which, based on current knowledge of cerebellar systems, proposes an interesting framework for interpreting cerebellar ataxias in clinical settings. The authors classify cerebellar ataxias into two main categories: those with a loss of Purkinje cells, translating into a malformation of internal models; and those with a disturbance of afferent systems, causing a mis-selection of internal models. From a clinical point of view, the first type of deficit, as observed in SCA31 and SCA6, results in decomposition and dysmetria. In addition, the loss of Purkinje cells may initially affect just the quality, and not the quantity, of internal models, and new internal models could be reconstructed using the residual circuits. This could explain why patients with pure cerebellar ataxia initially respond well to rehabilitation. Concerning the second category, a further classification differentiates disturbances of the corticopontocerebellar system, such as in MSA-C and SCA2, from those of the spinocerebellar system, such as in SCA1, SCA3, FRDA and AOA1. In all of them there is dysmetria and dysrhythmia but, while the pontocerebellar disturbance affects internal model selection for well-trained limb movements, such as writing or playing an instrument, disturbances of the spinocerebellar system affect actions requiring continuous feedback from the periphery, such as stance and gait. Finally, a third type of ataxia, such as DRPLA, involves the efferent system (deep nuclei), resulting in a deficit of overall cerebellar function.

    This classification has the great value of building a bridge between basic research and clinical knowledge, and it is simple enough to be applied in everyday clinical practice, but it deserves some comments.

    First, as already acknowledged by the authors, the pathology of degenerative ataxias is much more complex than the proposed schematization, as there is usually combined degeneration of various systems, and the earliest pathological alterations in many cases remain controversial [2].

    Second, the proposed classification is based on the classical view that the cerebellar cortex has a uniform structure and that a unique neural computation (the so-called "cerebellar transform") is performed throughout it [3]. This implies that the functional role of the different cerebellar areas is dictated by their input and output connections and that, once these connections are known, the impact of a lesion can be predicted relatively easily. It now appears that this traditional concept requires refinement. In fact, as underlined in a recent review by Cerminara et al. [4], cytoarchitectural and physiological variations have been identified in the cerebellar cortex, supporting the unorthodox hypothesis that the cerebellar cortex, and therefore its information processing, is not uniform. In this case, classifying ataxias merely on the basis of the anatomical regions affected may not be enough to predict the functional deficit. Interestingly, it seems that the pattern of Purkinje cell death in neurodegeneration is not random, and is probably genetically determined [4]. Therefore, a deeper knowledge of these patterns, together with a better understanding of cerebellar cortex computations, could lead to a more accurate phenotypic classification of ataxias.

    Third, Tada et al. [1] focused their observations on motor symptoms. As the cerebellum is now known to have non-motor roles as well, a similar classification regarding cognitive functions in ataxias would be of interest, even if, again, the complexity of neurodegeneration would make this task quite arduous [5].

    Finally, formally exploring the motor differences between these categories, with accurate recording of movement kinematics and a motor learning task, would be of interest. This would require testing subjects at an early stage of disease, to evaluate whether motor performance and motor learning vary across the different neurodegeneration patterns and with disease progression. This approach could ultimately lead not only to the development of new rating scales but also to improvements in current neurorehabilitation approaches.

    References

    1. Tada M, Nishizawa M, Onodera O. Journal of Neurology Neurosurgery and Psychiatry 2015;0:1-7. doi: 10.1136/jnnp-2013-307225

    2. Fratkin JD and Vig PJS. Handbook of Clinical Neurology 2012; 103:111-125.

    3. D'Angelo E, Casali S. Frontiers in Neural Circuits 2012;6:116. doi: 10.3389/fncir.2012.00116.

    4. Cerminara NL, Lang EJ, Sillitoe RV, Apps R. Nature Reviews Neuroscience 2015; 16:79-93.

    5. Bürk K. Cerebellum 2007;6(3):280-6.

    Conflict of Interest:

    None declared

  3. Azathioprine and cyclosporine for myasthenia gravis in pregnancy

    We read the recently published UK guideline for treating myasthenia gravis (MG) patients in pregnancy with great interest [1]. As stated by the workgroup in their article, few studies are available concerning drug safety in pregnancy in these patients. In our experience, most drugs indeed carry a very small risk in the treatment of MG during pregnancy. However, based on additional literature we would advise some caution regarding use of azathioprine and cyclosporine during pregnancy.

    Cleary and Kallen described pregnancy outcomes in 476 women using azathioprine during early pregnancy, mostly for inflammatory bowel disease (IBD) [2]. A significant increase in atrioventricular septal defects was found (OR 3.18); pre-term birth, low birth weight and small stature were also significantly more frequent. A trend towards an increase in all birth defects (OR 1.41, 95% CI 0.98-2.04) was found in women with IBD using azathioprine, compared with IBD patients not using azathioprine.

    For cyclosporine, a meta-analysis of multiple studies shows a non-significant increase in malformations (OR 3.83, 95% CI 0.75-19.6) and borderline significant prematurity (OR 1.52, 95% CI 1.00-2.32) [3]. Although inconclusive, the limited data available suggest that at least some caution is warranted. Overall, prematurity and low birth weight are reported more frequently in women exposed to cyclosporine, although it is unclear whether these are related to cyclosporine or to maternal disease [4].
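    As a brief illustration of why the quoted malformation estimate is non-significant: a 95% CI for an odds ratio is constructed on the log scale, and an interval that crosses 1 (as 0.75 to 19.6 does) indicates non-significance. Below is a minimal Python sketch, with the standard error back-derived from the published bounds; this is an interpretive illustration only, not the meta-analysis method itself.

```python
import math

def or_ci(or_hat: float, se_log_or: float, z: float = 1.96):
    """95% confidence interval for an odds ratio, computed on the log scale."""
    lo = math.exp(math.log(or_hat) - z * se_log_or)
    hi = math.exp(math.log(or_hat) + z * se_log_or)
    return lo, hi

# SE back-derived from the published bounds: (ln 19.6 - ln 0.75) / (2 * 1.96)
se = (math.log(19.6) - math.log(0.75)) / (2 * 1.96)  # ~0.83

print(or_ci(3.83, se))  # ~(0.75, 19.6): the interval crosses 1, so non-significant
```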

    In our opinion, neither azathioprine nor cyclosporine can be considered completely safe. Although both are unlikely to be major teratogens, we consider both to be relatively contra-indicated. Our recommendation would be:

    1) to discuss the small risks associated with use of azathioprine and cyclosporine and

    2) to discontinue both immunosuppressants from planned conception and during the first trimester in patients with stable disease and no or mild weakness.

    References

    1. Norwood F, Dhanjal M, Hill M, et al. Myasthenia in pregnancy: best practice guidelines from a U.K. multispecialty working group. J Neurol Neurosurg Psychiatry. 2014 May;85(5):538-43.

    2. Cleary BJ, Kallen B. Early pregnancy azathioprine use and pregnancy outcomes. Birth Defects Res A Clin Mol Teratol. 2009 Jul;85(7):647-54.

    3. Bar Oz B, Hackman R, Einarson T, et al. Pregnancy outcome after cyclosporine therapy during pregnancy: a meta-analysis. Transplantation. 2001 Apr 27;71(8):1051-5.

    4. Paziana K, Del Monaco M, Cardonick E, et al. Ciclosporin use during pregnancy. Drug Saf. 2013 May;36(5):279-94.

    Conflict of Interest:

    None declared

  4. Re:How accurate is the record linkage?

    We thank Dr. Voci for his comments[1] on our article.[2] As described in the paper, our data comprise Hospital Episode Statistics (HES) obtained from the English national Health and Social Care Information Centre (HSCIC) and mortality record abstracts obtained from the Office for National Statistics. All data from both sources were anonymised to the same standard by encryption of personal identifiers before being supplied to the Oxford study team for record linkage studies.

    The main data item used in the algorithm for linkage between successive HES records for the same person was "HES ID", an encrypted unique personal identifier generated by HSCIC. HES ID was present for 100% of the HIV HES records, 100% of the reference cohort, and 100% of the multiple sclerosis HES records. For the linkage of HES records to death records, probabilistic matching was used, based on encrypted NHS number, encrypted postcode, encrypted date of birth, and sex.
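    For readers unfamiliar with probabilistic matching, the sketch below illustrates the classic Fellegi-Sunter idea of scoring agreement across identifier fields; the authors do not specify their algorithm, so the field names, weights and threshold here are illustrative assumptions only, not the HSCIC/Oxford implementation.

```python
from dataclasses import dataclass

@dataclass
class Record:
    nhs_hash: str       # encrypted NHS number
    dob_hash: str       # encrypted date of birth
    postcode_hash: str  # encrypted postcode
    sex: str

# Agreement/disagreement weights (log-likelihood-ratio style).
# All numeric values are assumptions for illustration.
AGREE = {"nhs_hash": 20.0, "dob_hash": 8.0, "postcode_hash": 5.0, "sex": 1.0}
DISAGREE = {"nhs_hash": -4.0, "dob_hash": -4.0, "postcode_hash": -2.0, "sex": -3.0}
THRESHOLD = 12.0  # illustrative cut-off for accepting a link

def match_score(a: Record, b: Record) -> float:
    """Sum field-by-field evidence that two records belong to the same person."""
    return sum(
        AGREE[f] if getattr(a, f) == getattr(b, f) else DISAGREE[f]
        for f in AGREE
    )

def is_link(a: Record, b: Record) -> bool:
    return match_score(a, b) >= THRESHOLD
```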

    To illustrate the quality of the matching, and further to Dr. Voci's suggestion, we have followed up the HIV cohort for tuberculosis, a disease known to be associated with HIV. The rates in the HIV cohort were very substantially elevated compared with those in the reference cohort comprising people without HIV: there were 539 observed and 33.8 expected cases of tuberculosis in the HIV cohort, and 4763 observed and 5268.2 expected cases of tuberculosis in the reference cohort. The rate ratio, standardised for age and other factors as described in our original article [2], was 17.6 (95% CI 16.1 to 19.3). Restricting the outcomes to death records only, the standardised rate ratio was also very high: there were 34 observed and 1.4 expected cases of tuberculosis in the HIV cohort, and 496 observed and 528.6 expected cases of tuberculosis in the reference cohort. The standardised rate ratio was 25.5 (95% CI 17.5 to 36.2), i.e. there was a 25-fold increased death rate from tuberculosis in the HIV cohort. We have confidence in the dataset and the methods used to match and link records in it.
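    The crude (unstandardised) rate ratios can be reproduced directly from the observed and expected counts quoted above; the sketch below uses a log-normal approximation for the confidence interval and closely matches, without exactly reproducing, the published standardised figures.

```python
import math

def rate_ratio(obs_a: float, exp_a: float, obs_b: float, exp_b: float, z: float = 1.96):
    """Ratio of observed/expected in cohort A versus cohort B, with an
    approximate 95% CI based on the Poisson variance of log(RR)."""
    rr = (obs_a / exp_a) / (obs_b / exp_b)
    se = math.sqrt(1 / obs_a + 1 / obs_b)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# All tuberculosis outcomes: HIV cohort 539 obs / 33.8 exp; reference 4763 obs / 5268.2 exp
print(rate_ratio(539, 33.8, 4763, 5268.2))  # ~(17.6, 16.1, 19.3)

# Death records only: HIV cohort 34 obs / 1.4 exp; reference 496 obs / 528.6 exp
print(rate_ratio(34, 1.4, 496, 528.6))      # ~(25.9, 18.3, 36.6)
```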

    References

    1. Voci C. How accurate is the record linkage? J Neurol Neurosurg Psychiatry

    2. Gold J, Goldacre R, Maruszak H, et al. HIV and lower risk of multiple sclerosis: beginning to unravel a mystery using a record-linked database study. J Neurol Neurosurg Psychiatry 2014 Aug 4 [Epub ahead of print] doi: 10.1136/jnnp-2014-307932

    Conflict of Interest:

    None declared

  5. How accurate is the record linkage?

    This is an interesting study. However, I am somewhat concerned about whether the record linkage could be appropriately performed. In most healthcare information systems, HIV patients' characteristics are protected. Did the authors have enough information on the patients to perform probabilistic matching? It would be of interest if the authors could show that the matching is adequate, by showing that there is not a generally reduced risk of developing other autoimmune diseases and by confirming known HIV associations, for example with TB.

    Conflict of Interest:

    None declared

  6. Response to "Incident dementia and cognitive decline in patients with stroke/TIA" by Professor Tomoyuki Kawada

    Professor Kawada expresses three concerns regarding the outcomes of our study into how cognitive deficits progress in the years following a stroke [1].

    Firstly, he cites Rajan et al. [2] as finding that cognitive decline was greater after stroke than before stroke, and suggests that our study should have considered pre-stroke cognitive decline. How doing so would have affected our outcomes or their interpretation is unclear. We found greater decline in MMSE scores for stroke patients than for a well-matched healthy control group. This complements the findings of Rajan et al., whose cognitive measure was a composite of four tests that included the MMSE. We also found that 23% of our patients had a further stroke during follow-up, which was associated with accelerated cognitive decline. This is also in keeping with the findings of Rajan et al., but goes further to highlight a need to consider how overall outcomes are affected by including patients with repeat strokes in analyses of post-stroke cognitive decline.

    The second concern of Professor Kawada involves a recommendation that we conduct sensitivity analyses in light of a report of warfarin being associated with higher rates of haemorrhagic stroke than dabigatran etexilate, particularly in Asians as compared with non-Asians [3]. This seems beyond the scope or aims of our study.

    Professor Kawada's third concern involves our observation of increased rates of dementia in stroke patients, and he seems to suggest that pre-stroke cognitive functioning must be considered when making such claims. This is despite his further reference to studies reporting an independent effect of stroke on dementia, including one we also cite [4], and despite our exclusion of patients with a diagnosis of dementia prior to their stroke.

    The purpose of our study was to examine how cognitive deficits progress in the years following a stroke, which we did by comparing stroke patients, either with or without a further stroke, against a suitable control group. We believe that extending our analyses to include pre-stroke cognitive functioning and the effects of incident stroke rates would be additional purposes, and that not including these should not alter the interpretation of our results.

    References

    1 Sachdev PS, Lipnicki DM, Crawford JD, et al. Progression of cognitive impairment in stroke/TIA patients over 3 years. J Neurol Neurosurg Psychiatry 2014 Mar 21 [Epub ahead of print].

    2 Rajan KB, Aggarwal NT, Wilson RS, et al. Association of cognitive functioning, incident stroke, and mortality in older adults. Stroke 2014;45:2563-7.

    3 Hori M, Connolly SJ, Zhu J, et al. Dabigatran versus warfarin: effects on ischemic and hemorrhagic strokes and bleeding in Asians and non -Asians with atrial fibrillation. Stroke 2013;44:1891-6.

    4 Savva GM, Stephan BC. Epidemiological studies of the effect of stroke on incident dementia: a systematic review. Stroke 2010;41:e41-6.

    Conflict of Interest:

    None declared

  7. Incident dementia and cognitive decline in patients with stroke/TIA

    Sachdev et al. conducted a 3-year follow-up study to examine the progression of cognitive deficits after stroke or transient ischemic attack (TIA) (1). The authors enrolled 183 stroke/TIA patients and 97 healthy controls, and concluded that cognitive decline in post-stroke patients was not greater than in controls, except for verbal memory, although rates of incident dementia were 5.9% per year in patients and 0.4% per year in controls. I have some queries about their study outcomes.

    First, Rajan et al. conducted a cohort study to examine the bidirectional association between incident stroke and cognitive function in 7217 older adults assessed at 3-year intervals (2). They concluded that baseline cognitive function was associated with incident stroke, and that the level of cognitive decline was 1.9-fold higher after stroke compared with the period before stroke. This study suggests that the level of cognitive decline should also be compared in stroke/TIA patients before and after events.

    As the second concern, there is a report that the effects of different anticoagulant medications differ by stroke subtype and are affected by race (3). I recommend that the authors conduct a sensitivity analysis by stroke subtype, such as ischemic and hemorrhagic stroke, taking medication into account, which would be possible given a sufficient number of incident strokes.

    As the third concern, Savva et al. reported a systematic review of the effect of stroke on incident dementia (4). In their study, stroke approximately doubled the risk of incident dementia in the older population, and the increase could not be explained by demographic or cardiovascular risk factors or by pre-stroke cognitive decline. Related to their study, Dregan et al. estimated the occurrence of post-stroke dementia as a function of pre-stroke cognitive status and incident stroke, and concluded that stroke and pre-stroke cognitive impairment were independently associated with increased risk of post-stroke dementia (5). These reports show that the level of cognitive function should be assessed in stroke/TIA patients both before and after events. I understand that the study design by Sachdev et al. can also evaluate the progression of cognitive impairment in stroke/TIA patients with adjustment for confounders. My recommendation could be addressed by a follow-up study of a large number of subjects without stroke or TIA at baseline.

    References

    1 Sachdev PS, Lipnicki DM, Crawford JD, et al. Progression of cognitive impairment in stroke/TIA patients over 3 years. J Neurol Neurosurg Psychiatry 2014 Mar 21. doi: 10.1136/jnnp-2013-306776

    2 Rajan KB, Aggarwal NT, Wilson RS, et al. Association of cognitive functioning, incident stroke, and mortality in older adults. Stroke 2014;45:2563-7.

    3 Hori M, Connolly SJ, Zhu J, et al. Dabigatran versus warfarin: effects on ischemic and hemorrhagic strokes and bleeding in Asians and non -Asians with atrial fibrillation. Stroke 2013;44:1891-6.

    4 Savva GM, Stephan BC; Alzheimer's Society Vascular Dementia Systematic Review Group. Epidemiological studies of the effect of stroke on incident dementia: a systematic review. Stroke 2010;41:e41-6.

    5 Dregan A, Wolfe CD, Gulliford MC. Does the influence of stroke on dementia vary by different levels of prestroke cognitive functioning?: a cohort study. Stroke 2013;44:3445-51.

    Conflict of Interest:

    None declared

  8. Rapid cognitive decline in attention, executive function and memory: the gold standard for delirium?

    I endorse the authors' use of multiple information sources to identify prevalent delirium in medical and surgical inpatients [1,2]: family and other informants, junior and senior physicians, medical case notes and nurses. Although the authors did not use the model I developed, I suggest that their paper is about distinguishing rapid cognitive decline (RCD, one form being delirium) from slow cognitive decline (SCD) in dementia and mild cognitive impairment (Diagnostic and Statistical Manual of Mental Disorders, DSM-5, major or minor neurocognitive disorder). Until each cognitive domain is measured in RCD patients by a 6-minute battery of non-copyrighted paper-and-pencil neuropsychological tests, I prefer the umbrella term rapid cognitive decline (RCD) over RCD subtypes such as delirium.

    My prospective randomised controlled trial, the Central Coast Australia Delirium Intervention Study (CADIS), examined a new RCD phenotype. CADIS enrolled 130 Confusion Assessment Method (CAM)-positive elderly patients who had at least a 25% relative decline in attention or executive function, with stringent measures to exclude behavioral and psychological symptoms of dementia (BPSD) [3-5]. RCD is classified by cognitive domain into five main subtypes:

    1) Delirium: RCD involving mainly attention, executive function and word-list memory;
    2) Transient global amnesia: RCD involving memory and orientation with sparing of other domains;
    3) Dysphasia: RCD involving mainly language, often from acute ischemic stroke (AIS);
    4) Executive dysfunction: RCD due to AIS;
    5) Visuospatial impairment: RCD from AIS.

    RCD in CADIS was associated with rapid cognitive recovery (RCR).

    I wish to focus on seven issues in the authors' methods, results and conclusions which can lead to false positive delirium diagnoses in severe hearing impairment, dementia and BPSD.

    1) While spatial span forward (SSF) is an excellent test for inattention, months of the year backwards (MOTYB) reflects both attention and executive function. If MOTYB improved from, say, 2/12 months accurate on admission to 12/12 months at discharge, this rapid recovery would indicate that the abnormal MOTYB was due to delirium rather than dementia. If MOTYB did not change, an abnormal score on admission should not be used as evidence of delirium.

    2) Digit span forward (DSF) in CADIS proved highly robust and sensitive to change: DSF fell by 54% prior to admission and improved by 54% from admission to recovery. DSF is quicker and easier to measure than SSF and MOTYB.

    3) The investigators did not exclude severely deaf subjects or use amplifiers with headphones (cost US$120) to ameliorate hearing deficits, leading to false positive attribution to delirium.

    4) The opening sentence of the discussion section decries underdetection of delirium, which I agree is a major problem. The greater problem of overdiagnosis, however, receives almost no attention in the medical literature: false positive diagnosis of delirium in dementia with or without BPSD. Behavioral decline is counted falsely as disorganized thinking. In many countries with national health services, a GP who wishes to admit a dementia patient with behavioral problems will insist on acute onset and new confusion, knowing that if the patient were labelled as having BPSD the hospital physicians would recommend management in the community. Furthermore, there is a certain prestige in diagnosing delirium which is absent with BPSD.

    5) The questions used by geriatricians to identify disorganised thinking, such as "Would a stone float on water?" or interpreting a proverb, are impaired in moderate to severe dementia and are not specific for delirium.

    6) The authors state in the last paragraph of the introduction that their aim was to determine whether SSF and MOTYB would predict delirium by DSM-IV. While their findings confirmed this, the data they present are silent on reversibility and on the magnitude and rate of cognitive decline and recovery.

    7) CAM item 4 in table 1, altered level of consciousness during the interview, could have arisen from factors other than delirium, such as sedatives or antipsychotics for BPSD, sleep deprivation from acute symptoms or tests after midnight, or temporary lack of CPAP in obstructive sleep apnoea (CPAP is often not taken in the ambulance to the emergency department).

    In conclusion, I believe that the speed of cognitive decline (25% per 24 hours) and the speed of cognitive recovery are the gold standards for delirium and other forms of RCD. This parallels the time course of many disease pairs: acute kidney injury compared with chronic kidney disease, or asthma compared with chronic obstructive pulmonary disease. The suggestion that diagnostic certainty should wait 2-3 days after hospital admission to measure recovery is no different from the waiting period to confirm bloodstream infection, or from waiting for tests not available in medium or small hospitals after midnight, such as MRI showing a new cerebral infarct on T2 or FLAIR when the initial CT scan shows no acute changes. It is time for logic-based medicine to correct evidence-based medicine (EBM), which started from an imperfect delirium phenotype in DSM that was perpetuated in 1990 with the CAM.

    References

    1. O'Regan NA, Ryan DJ, Boland E, et al. Attention! A good bedside test for delirium? J Neurol Neurosurg Psychiatry 2014;85:1122-1131.

    2. Ryan DJ, O'Regan NA, O Caoimh R, et al. Delirium in an adult acute hospital population: predictors, prevalence and detection. BMJ Open 2013;3:e001772.

    3. Regal P. Delirium reversibility and instrumental activities of daily living. Geriatrics and Gerontology International 2014;14:in press.

    4. Regal P. Serial Montreal Cognitive Assessments demonstrate reversible cognitive impairment in patients with acute transient ischemic attack and minor stroke. Stroke 2014;45:e193.

    5. Regal P. Need for new methods to study delirium. Internal Medicine Journal 2013;43:1053-4.

    Conflict of Interest:

    None declared

  9. Control arms in brain stimulation trials and experiments

    The JNNP recently published an interesting study which used magnetic stimulation to the spine of people with Parkinson's disease, and showed a promising improvement in camptocormia compared to a group receiving 'sham' stimulation [1]. In an Editorial Comment, Caslake [2] argued that the participants in this particular study may not have been sufficiently blinded as to the group to which they had been assigned. Although the stimulation used by Arii and colleagues was delivered to the spine and not to the head (trans-spinal magnetic stimulation, TSMS, rather than transcranial magnetic stimulation, TMS), we believe there are general principles of experimental design that are of relevance to studies using these technologies.

    We have previously argued that it is almost impossible to create a convincing placebo control for experiments using stimulation such as TMS or transcranial current stimulation (tCS) [3]. TMS by its nature is a multisensory experience: driving a current through the hand-held coil creates the intended magnetic pulse, but as a side-effect creates a clicking sound and, depending on the stimulation target, a muscular twitch or an illusory flash of light in the participant's vision. One option for creating a sham condition is to use a "sham coil" which reproduces the clicking sound but not the other side-effects. If an individual, naïve participant does not participate in both active and control arms of the study, it can usually be argued that the presence or absence of side-effects does not give the participant enough information to know which treatment they had received. However, in the study by Arii et al. [1], and in many others reported in the literature, the participants crossed over so that each person experienced both conditions. This means that each participant was able to compare their second treatment to their first, thus weakening the blinding. In the case of the study by Arii et al., we note first that their Table 2 suggests a tell-tale additional improvement in the group that received real stimulation after sham, and second that the authors do not report subjective comments from the participants except that two participants experienced discomfort.

    The poverty of the control arm in stimulation therefore introduces a challenge when designing experiments and trials that use these techniques. One solution, as we mentioned, might be to divide participants into groups who separately receive either real or sham stimulation. Another solution could be to introduce an alternative stimulation target which is hypothesised not to be involved in the behaviour of interest, although we note that this is a difficult and somewhat subjective alternative [3]. At a more philosophical level, it could also be argued that so-called 'non-invasive' [4] techniques such as TMS and tCS are inherently imperfect for creating control and blinding conditions in awake and aware participants, and that any reported results should therefore be qualified with participants' comments and other observations so that the degree of unblinding can be assessed.

    In conclusion, we agree with the comments of Caslake [2] that proper experimental science requires a proper experimental control; however, it is important to acknowledge that proper blinding is very difficult to achieve in many situations. Our use of non-invasive stimulation requires an ethical balance between the need to preserve the scientific integrity of an experiment or trial and the desire to advance treatments for debilitating conditions. We therefore advocate a culture of open reporting of all relevant circumstances in an experiment.

    Dr Nick Davis, Swansea University, UK; Dr Martyn Bracewell, Bangor University, UK.

    References

    1. Arii Y, Sawada Y, Kawamura K, et al. Immediate effect of spinal magnetic stimulation on camptocormia in Parkinson's disease. Journal of Neurology Neurosurgery and Psychiatry 2014;85:1221-26. doi: 10.1136/jnnp-2014-307651.

    2. Caslake R. Difficulties with control arms in repetitive magnetic stimulation studies. Journal of Neurology Neurosurgery and Psychiatry 2014;85:1182. doi: 10.1136/jnnp-2014-307906.

    3. Davis N, Gold E, Pascual-Leone A, et al. Challenges of proper placebo control for noninvasive brain stimulation in clinical and experimental applications. European Journal of Neuroscience 2013;38(7):2973-77

    4. Davis N, van Koningsbruggen M. 'Non-invasive' brain stimulation is not non-invasive. Frontiers in Systems Neuroscience 2013;7:76 doi: 10.3389/fnsys.2013.00076.

    Conflict of Interest:

    None declared

  10. Reply to Letter: Which target is best for patients with Parkinson's disease? A meta-analysis of pallidal and subthalamic stimulation

    We are pleased that Dr. Cyron agrees with our statement. Although our meta-analysis revealed that depression was more frequent with STN DBS than with GPi DBS, additional randomized trials investigating long-term outcomes, including UPDRS scores, quality of life and adverse events, are required to determine which target is more appropriate for patients with PD. We hope our findings and suggestions will be useful for future studies providing valuable evidence, just as meta-analyses have historically helped to establish various treatment options.

    Conflict of Interest:

    None declared

