This review argues that, even with the tremendous advances in diagnostic neuroimaging, the clinical skills of neurology (ie, history, examination, localisation and differential diagnosis) remain key. Yet a number of recent audits suggest that many patients are not being assessed properly, risking patient harm; costly, unnecessary or inappropriate investigations; or delayed diagnosis. We review some of the reasons why patients are not being properly assessed neurologically, in part because many doctors have limited neurological exposure and are hence neurophobic. We propose that the solution centres on ensuring that a core set of basic neurological skills is taught at undergraduate level, while higher-level skills, such as the use of heuristics, are taught at postgraduate level.
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/
“The nervous system is almost entirely inaccessible to direct examination. The exceptions to this are trifling. The termination of one nerve, the optic, can be seen within the eye.”1 So stated William Gowers in 1888 in his introduction to the general symptomatology of neurological diseases.
Clearly, investigations were largely non-existent in the late 19th century, which is why contemporary descriptions from Gowers and other founders of neurology, such as Charcot, focused on the importance of history and examination. Yet in one recent UK study, an estimated 33% of inpatients referred to neurologists by physicians could not recollect being examined with a tendon hammer, and 48% said that they had not been examined with an ophthalmoscope (compared with only 4% who could not recall being examined with a stethoscope) prior to referral to neurology.2 Although this might make Gowers roll in his grave at the state of UK neurology, a US study fared even worse. Of 350 patients in an emergency department whose symptoms would necessitate ophthalmoscopy (predominantly headache), only 14% were examined with an ophthalmoscope.3
With this background, this review argues that even in the 21st century we need to remind ourselves of the importance of these principles of clinical reasoning and examination, Charcot's so-called ‘méthode anatomo-clinique’, for a number of reasons:
- safe patient care and rapid diagnosis
- ensuring appropriate, timely investigations are requested
Finally, having outlined the problems that arise as a result of inadequate neurological examination, we set out some solutions, all of which centre on improvements in neurological training: teaching an agreed core set of neurological examination skills, and having subspecialty neurological associations develop basic standards of examination skills by consensus.
History and examination both matter, though the former trumps the latter (probably)
Different from all other medical specialties, save perhaps psychiatry, the neurologist is heavily dependent on listening to and interpreting what the patient tells us… If you don't know what is happening by the time you get to the feet you are in real trouble
Jerome B Posner, 20134
One cannot occur without the other and, as Posner has intimated, what one looks for in the neurological examination is determined by the history.
History taking is an art. Medical students are taught that taking an appropriate history will reveal the patient's diagnosis. In practice, the art of history taking takes years to perfect, using a variety of techniques to allow the patient to tell their story. There is no established figure for the time required for a neurologist, compared with a physician or a general practitioner, to develop such skills; the seminal paper by Ericsson5 (popularised by Malcolm Gladwell in his book Outliers), which suggests an average of 10,000 h of deliberate practice to become an expert, may give some guidance from other fields.
Studies performed 20–30 years ago in the general medical outpatient setting noted that the history alone provided the diagnosis in 76–82% of cases.6 ,7 More recently, Paley et al assessed the value of basic clinical methods in diagnosing patients requiring admission to general medicine through the emergency department. They found that the history was the key element in formulating a diagnosis, either alone (approximately 20% of all diagnoses) or in combination with physical examination (an additional 40% approximately). Ultimately, 90% of patients were correctly diagnosed at presentation through a combination of history, examination and basic tests.8
The examination itself serves several purposes:

- to explain symptoms, assist localisation and diagnosis, and exclude abnormality
- to demonstrate understanding and relevance to the patient
- to evaluate function (eg, consciousness, swallowing, walking)
- to evaluate change (eg, Glasgow coma scale, strength)
- to seek avenues of treatment (eg, spasticity, weakness)
Although history taking is crucial to establishing a diagnosis, Peterson et al7 showed that in a third of patients the physical examination increased physicians’ confidence in the diagnosis made after history taking, from 7.1 to 8.2 on a 10-point scale. As detailed above, Paley et al8 found that examination in addition to history taking increased the diagnostic yield, but examination alone led to a diagnosis in only 1% of patients. It is important to note that it is difficult to separate history taking completely from examination: tremor, abnormal gait, facial weakness, general habitus and dysarthria, for example, can all be observed before the formal examination begins.
In one study of 200 community neurological patients (the majority of whom were outpatients), in 28.5% of cases, the examination helped establish the diagnosis, though in 10% of the cases, the examination findings were misleading, emphasising the importance of taking the examination findings in their clinical context.11
It is clear that history and examination cannot be taken in isolation; they are not separate entities but two overlapping pieces of the clinical puzzle that work synergistically to enable the clinician to reach a diagnosis.
What are the components of the neurological examination, which are essential, and in what context?
If we set about teaching medical students to examine all patients with a complete neurological examination, are we at risk of making them neurophobic? After all, one group identified no fewer than 94 individual elements of the neurological examination from a review of the literature and the American Academy of Neurology clerkship core curriculum.12 Clearly, if an experienced neurologist can examine a patient effectively in 3 min,13 how can we teach what really matters to our students? There are two approaches.
The essential neurological examination in the patient unlikely to have findings
First, where the history suggests that the patient is unlikely to have findings on examination, a Canadian group provided some very useful data,12 later replicated by a Brazilian group.14 Essentially, using the Delphi method (a way of reaching a consensus as objectively as possible on a complex problem), they asked 24 neurology colleagues and 168 medical students to select, from a list of 44 items, the essential aspects of the neurological examination that they would use in such a patient at least 80% of the time. To assess whether the results were generalisable, they solicited the views of 38 other Canadian neurologists for validation. The Brazilian group did a similar, though smaller, study, with similar results.14 The resultant 22 core items identified by the Canadian group covered fundoscopy, light reflex, visual fields, pursuit extraocular movements, facial muscles, tongue, gait, tandem gait, pronator drift, rapid alternating movements of arms, finger-nose, tone in arms and legs, power in arms and legs, reflexes (biceps, brachioradialis, triceps, patellar, Achilles, plantar) and light touch.
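The selection rule described above can be sketched as a simple tally: an item enters the core set when enough respondents say they would use it in such a patient. The data, the item names and the consensus threshold below are invented for illustration; the actual studies used 44 candidate items and a formal multi-round Delphi process.

```python
# Illustrative tally of a Delphi-style survey (hypothetical data).
# Each respondent marks whether they would use the item in at least
# 80% of patients unlikely to have examination findings.
responses = {
    "fundoscopy":     [True, True, True, False, True],
    "tandem gait":    [True, True, True, True, True],
    "corneal reflex": [False, False, True, False, False],
}
CONSENSUS = 0.8  # assumed fraction of respondents required for inclusion

core_items = [item for item, votes in responses.items()
              if sum(votes) / len(votes) >= CONSENSUS]
print(core_items)  # fundoscopy and tandem gait reach consensus; corneal reflex does not
```

A real Delphi process iterates over several survey rounds, feeding the group's answers back to respondents, but the inclusion criterion at each round is essentially this kind of threshold count.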
A different situation arises in the patient with a focal lesion but few physical signs: how sensitive and specific is the neurological examination? A New Zealand group prospectively studied this in 46 patients with a focal cerebral hemisphere lesion without obvious focal signs and 19 control subjects with normal imaging.15 The examiners were blinded to the history, imaging result and diagnosis but were told the patient's age and handedness. They found that the upper limb tests with the highest sensitivities for focal lesion detection were finger rolling (0.33), power (0.30), rapid alternating movements (0.30), forearm rolling (0.24) and pronator drift (0.22), with all of these tests having a specificity of 1.0. In combination, these tests detected a focal lesion in 50% of the patients. Overall, the neurological examination detected a focal lesion in 61% of patients with imaging-proven focal disease.
The discrepancy between previous studies reporting a high sensitivity of forearm or finger rolling for focal cerebral hemisphere lesions16 ,17 and the significantly lower sensitivities in the New Zealand study probably relates to patient selection in the latter, which excluded patients with obvious signs. Interestingly, the patients with focal lesions had been examined by a neurologist or neurosurgeon before recruitment, and signs of focal disease had been detected in only one-third.15 Clearly, the decision to image a patient is not made on examination findings alone. Nonetheless, it would be interesting to see further studies in different populations, and indeed of the use of such techniques by non-neurologists.
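A rough calculation helps put the combined detection rate of 50% in context. This is an illustration under an assumption of test independence, not a result from the study: if the five upper-limb tests missed lesions independently of one another, their combined sensitivity would be much higher than observed, which suggests the tests tend to miss (and detect) the same patients.

```python
# Reported sensitivities of the five upper-limb tests (finger rolling,
# power, rapid alternating movements, forearm rolling, pronator drift).
sensitivities = [0.33, 0.30, 0.30, 0.24, 0.22]

# If the tests were independent, a lesion is missed only when every
# test misses it, so combined sensitivity = 1 - product of miss rates.
miss_all = 1.0
for s in sensitivities:
    miss_all *= (1.0 - s)
combined_if_independent = 1.0 - miss_all

print(round(combined_if_independent, 2))  # ≈ 0.81, well above the observed 0.50
```

The gap between the hypothetical 0.81 and the observed 0.50 is consistent with the tests probing overlapping functions, so adding more of them yields diminishing returns.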
The focused clinical examination
This is what most experienced neurologists actually do: a clinical examination tailored to the individual clinical problem. The work of Moore et al12 and Lima et al,14 based on the scenario of a patient unlikely to have findings on examination, could perhaps be extended using the same Delphi methodology to establish which parts of the neurological examination should be performed for a variety of suspected conditions. Some examples:
suspected stroke (where an overly lengthy and detailed neurological examination could adversely affect outcome, and a rapid neurological examination sufficient to assess the safety of thrombolysis may be all that is required);
cognitive disorders (eg, the assessment of a patient with suspected Alzheimer's disease may differ from that of a patient with suspected frontotemporal dementia, where the MMSE (mini-mental state examination) would be a very poor discriminator);
suspected coma (where a more detailed assessment of brain stem function and a search for signs of meningism would be essential);
neuromuscular disorders (in suspected myasthenia, the assessment of ptosis, neck flexion and limb weakness is very different from the usual neurological examination);
suspected epilepsy (where the essential neurological examination described above12 may well suffice, but other non-neurological aspects may take on greater importance, eg, blood pressure in syncope, skin involvement in tuberous sclerosis);
movement disorders (eg, assessments of tremor or dystonia, asking the patient to write or hold a drink, assessment of postural instability).
Such an assessment of the parts of the neurological examination actually used by different neurologists would be valuable: it would bring greater consistency within subspecialties (eg, the widespread use of the expanded disability status scale by the multiple sclerosis community18 compared with the plethora of clinical examination tools used by cognitive neurologists) and help to weed out clinical tests of little value (eg, Weber's and Rinne's tests in the assessment of hearing19). It could also help our trainees understand why neurologists use heuristics, or shortcuts, in their decision making to work through complex clinical information and formulate a diagnosis efficiently.20 A heuristic is a decision strategy that ignores part of the available information and focuses on the few relevant predictors. In essence, an expert clinician, without knowing it, uses a Bayesian approach to probability assessment: starting from an initial probability estimate based on knowledge of disease prevalence or on experience,21 and updating it with an awareness of potential risks and biases.20
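This Bayesian updating can be made concrete with a small worked example. The prevalence, sensitivity and specificity below are invented for illustration and are not drawn from any cited study.

```python
# Minimal sketch of Bayesian updating after a positive examination finding.
def posterior_probability(prior, sensitivity, specificity):
    """Probability of disease given a positive finding (Bayes' theorem)."""
    # Overall probability of a positive finding: true positives + false positives.
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# A sign with modest sensitivity (0.30) but high specificity (0.95)
# raises a 20% pre-test probability to roughly 60% when present.
print(round(posterior_probability(0.20, 0.30, 0.95), 2))
```

The point the expert clinician exploits intuitively is that a highly specific sign, even an insensitive one, shifts the probability substantially when it is found, whereas its absence changes little.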
Investigation follows clinical assessment, not before
If there is so little evidence quantifying the value of individual aspects of the neurological examination, should we be relying on the historic proclamations of eminent neurologists? There is no doubt that imaging modalities have advanced radically in recent times and are readily available, but this should not lead to an ethos of scan first, clinically assess second (or not at all). There are several pitfalls associated with relying on technology to diagnose or exclude disease (table 1).21–24
First, requesting the wrong scan (eg, a non-contrast CT head scan rather than a CT venogram in a patient with suspected venous sinus thrombosis, and there are many other examples).
Second, a wide range of potentially life-changing or life-threatening diseases can present with normal imaging, where the clinical assessment is crucial to establishing the diagnosis. This can be a particular problem for the non-neurologist, who can be falsely reassured by a ‘normal’ scan.
Third, there is the risk of overenthusiastic imaging where the clinical assessment does not call for it, exposing patients to potential harm and anxiety and spawning a range of acronyms: victims of modern imaging technology (VOMIT)25; brainless application of radiological findings; ominous referrals for dubious unattested radiographic examinations; and even a Campaign for Real Ailments. Others have voiced concern, regarding VOMIT as a flawed principle.26 Ultimately, it is easy for clinicians to criticise radiologists for reporting non-specific or benign findings, but it is crucial to appreciate the importance of the clinician requesting, or perhaps more importantly not requesting, a scan on the basis of the clinicopathological reasoning outlined earlier. Compared with our predecessors, we have the luxury of an arsenal of imaging modalities at the click of a button, but if misused and misdirected this can ultimately lead to harm.

Given rising healthcare costs and the increasing use of imaging (an 86% rise in the number of CT scans and a 125% increase in the use of MRI since 2004–2005),27 ,28 it is essential that investigations are requested appropriately, based on a proper clinical assessment. The costs of neuroimaging for headache in the USA now total almost $1 billion per annum, much of it unnecessary, in spite of guidelines to reduce unnecessary imaging.29 This is simply not sustainable given the pressure to limit healthcare costs.
Even the founders of neurology, like Gowers and Hughlings Jackson, sometimes had to face the lawyers, but the spiralling cost of neurological litigation in the 21st century should startle anyone. A review of the records of the NHS Litigation Authority over more than a decade revealed that the cost of all closed claims for neurological illnesses was £37 million (2% of all expenditure).22 Between 2007 and 2012, the NHS Litigation Authority processed 120 claims in relation to neurology that were settled with damages. Of these, 35 cases (29.2%) cited misdiagnosis or mistreatment resulting from an incomplete or inaccurate neurological examination as a reason for litigation, accounting for £3.6 million in damages.30 Clearly, the cost of failing to carry out an appropriate clinical assessment is high, both for the patient and for their clinician.
Neurophobia vs neurophilia
Neurophobia, the fear of clinical neurology and the neurological sciences, was first described in medical students in 1994 and was thought to result from an inability to apply basic science knowledge to clinical scenarios.31 It has since been shown to be prevalent in junior doctors and widespread throughout the world.32 ,33 Despite 10–20% of patients admitted to hospital having a primary neurological problem,34 medical students find neurology challenging and do not feel comfortable performing the neurological examination on patients.12
Neurophobia stems from a lack of adequate undergraduate teaching. Neuroscience and clinical neurology are complex subjects to both understand and teach. Integrating basic neuroscience with pathology is crucial if students are to apply either to a patient's clinical presentation, yet this integration is difficult to achieve: students often reach competency but not proficiency, in that they can examine a patient but cannot interpret their findings to reach a diagnosis.35

This is in no small part due to the way neurology teaching is delivered in the UK. Students are taught neurology in many different healthcare settings by a variety of medical professionals, who may themselves have been taught neurology by a non-neurologist; although it has not yet been proven to be contagious, neurophobia is potentially passed from teacher to pupil. Across all 30 UK medical schools, there is wide variation in the length and structure of undergraduate neurology teaching, with nearly a third unable to guarantee that teaching is delivered by a neurologist.35 While learning neurology in different settings has its benefits, the risk is that students receive mixed messages on what the neurological examination is, how to perform it and how to interpret it. Although neurology can be difficult, in the majority of cases a diagnosis is straightforward if the principles of history, examination, localisation and differential diagnosis are followed (and in that order); but this facility can only be acquired through deliberate practice over many hours.5 Without a clear understanding of how to carry out this clinico-anatomical approach, the student faced with a clinical scenario is ill-prepared to deal with it, feeling lost in a challenging and complicated subject without the necessary skills and knowledge to help.32 ,33 ,36
While there have been a multitude of discussions regarding neurophobia and its implications, a recently described opposite, neurophilia (a fascination with neurology), has been found to be commonplace in medicine.37
How can neurophobia be tackled? Some have advocated that students should start acquiring core knowledge and skills in neurological examination in the context of a clinical problem early in their clinical training, and that learning the neurological examination is not a single-step process but one to be improved upon and refined throughout training.34 ,36 Others have proposed that a hypothesis-driven rather than a screening approach to neurological examination should be taught, as this is the method used by most neurologists.36 Neurologists should also communicate an overall strategy to other departments within the medical school and increase their interaction with general medical and general practice colleagues, to help prevent the dissemination of neurophobia.34 ,36 We would suggest that, at the very least, every student should gain an understanding of the basics of the essential neurological examination.12 ,14
We propose that neurophobia and neurophilia are opposite poles of the same spectrum. This can be demonstrated using Miller's pyramid for assessing clinical competence as a guide (figure 1).38 Traditionally, the pyramid has four stages: knows; knows how; shows how; and does. If we take a clinical problem such as eye movement dysfunction as an example, then ‘knows’ would be knowledge of the neuroanatomy of extraocular movements; ‘knows how’ would be applying that knowledge to the clinical problem, that is, knowing how to examine the eyes, what nystagmus is and what it means; ‘shows how’ would be examining extraocular movements, looking for saccades and smooth pursuit, and being able to recognise the signs to reach a diagnosis of internuclear ophthalmoplegia (competency); and ‘does’ would be repeating the process outside the examination environment, tackling the clinical problem independently, to reach proficiency (figure 2). As the individual progresses up the pyramid, their neurophobia lessens and their tendency towards neurophilia increases, as they progress from novice to expert (figure 3).
The key should be to aid the progression of medical students up through the pyramid (figures 1–3). Neurologists are well placed to support this by coordinating neurology teaching at their affiliated medical schools, ensuring consistency, not confusion, and ensuring that core skills are taught12 and audited.2 At a postgraduate level, neurologists can teach more advanced neurological examination skills based on a Bayesian approach21 and heuristics.20
History and examination are, and will continue to be, the cornerstones of clinical medicine, yet several studies suggest that physicians underestimate the impact of examination findings when estimating disease probabilities.21 While the clinician has a medley of investigative tools available, these should be used appropriately, to aid the diagnostic process. Even in the modern era, there needs to be wider dissemination to non-neurologists of the essential neurological skills needed for good-quality patient care; national neurological associations should lead on this and provide guidance on acceptable, yet realistic, skill sets.
Contributors DJN performed an initial draft and literature review. This was then reviewed, rewritten and added to by JPA. Both authors agreed on a final draft.
Competing interests None.
Provenance and peer review Commissioned; externally peer reviewed.