Diagnostic MRI in dementia has, to date, focused almost exclusively on atrophy patterns. Although sometimes useful, atrophy assessment more often yields the report with which readers will be most familiar: 'No focal lesion; a degree of volume loss, possibly more than expected for age'. Things get worse with differential diagnosis. This is particularly evident in frontotemporal dementia (FTD), in which most patients fall into one of two pathological groups: TDP-43 or tau. There are no specific imaging markers for either, so one is left with known relationships to clinical phenotypes to guess at the pathology: semantic dementia, TDP-43 quite likely; non-fluent aphasia, tau somewhat likely; behavioural variant, toss a coin. In fact, if one strictly controls for clinical phenotype, tau and TDP-43 can show identical atrophy patterns.1 Atrophy as a diagnostic marker also suffers from being a function of disease severity, meaning that it is most apparent in advanced disease, that is, when it is least needed. Imaging markers of disease 'state' (eg, tau vs TDP-43), independent of severity, would be far more useful for diagnosis. This means looking for markers other than atrophy.
There is a pathological precedent to suggest that white matter (WM) might separate tau and TDP-43. The FTD-associated tauopathies—for example, corticobasal degeneration, progressive supranuclear palsy, Pick's disease—are associated with extensive tau-related WM pathology.2–4 McMillan et al5 investigated the ability to separate tau and TDP-43 using both the conventional atrophy approach and WM diffusion MRI. WM diffusion emerged as such a resounding winner (almost perfect, whereas atrophy was only just above chance) that assessing the added value of combining the two modalities was rendered redundant. Arguably, the result is all the more compelling because the groups were poorly matched for clinical phenotype (which could have assisted the atrophy classifier) and some cases had a fairly unsophisticated diffusion acquisition (which could have penalised the diffusion classifier).
The authors used a machine classifier, a technique that could offer a route to diagnostic translation. The principle is that, given data containing a great deal of information, a computer identifies the elements that best separate two groups. First, the computer uses a training dataset to find the best separation; the robustness of that solution is then tested by examining how accurately it can classify an independent sample. The problem is that there are often insufficient cases for independent training and test phases. McMillan et al had this problem and so employed a compromise approach: 'leave one out' (LOO). LOO means training on all bar one of the cases and testing whether the omitted case is correctly classified, iterating so that each case is left out in turn.
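To make the LOO procedure concrete, here is a minimal sketch in Python. It is purely illustrative: the nearest-centroid classifier, the single 'diffusion metric' feature and the data values are all invented for this example and bear no relation to McMillan et al's actual pipeline or results.

```python
def nearest_centroid_predict(train, case):
    """Assign `case` to the class whose training-set mean (centroid) is closest."""
    groups = {}
    for features, label in train:
        groups.setdefault(label, []).append(features)
    best_label, best_dist = None, float("inf")
    for label, rows in groups.items():
        centroid = [sum(col) / len(rows) for col in zip(*rows)]
        dist = sum((a - b) ** 2 for a, b in zip(case, centroid))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

def loo_accuracy(dataset):
    """Leave-one-out: train on all bar one case, test the omitted case, repeat."""
    correct = 0
    for i, (features, label) in enumerate(dataset):
        train = dataset[:i] + dataset[i + 1:]
        if nearest_centroid_predict(train, features) == label:
            correct += 1
    return correct / len(dataset)

# Hypothetical data: one made-up diffusion metric per patient.
data = [([0.31], "tau"), ([0.29], "tau"), ([0.33], "tau"),
        ([0.45], "TDP-43"), ([0.47], "TDP-43"), ([0.44], "TDP-43")]
print(loo_accuracy(data))  # well-separated toy data -> 1.0
```

Note that every case serves as a test case exactly once, so with n cases the classifier is trained n times; this squeezes the most out of a small sample, at the cost described next.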
A major concern for machine classifiers with small numbers and LOO is 'overfitting': finding a solution that is unique to the dataset because it identifies disease-independent idiosyncrasies. By analogy, imagine six men, three of whom have Alzheimer's disease. Each of the Alzheimer's cases either has a beard or wears glasses, while this is true of none of the other three. A machine that used 'beard' and 'glasses' to divide the groups would diagnose Alzheimer's disease perfectly in this sample, yet the solution is useless for new cases because the classifier has latched onto idiosyncrasies that have nothing to do with Alzheimer's disease.
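The beard-and-glasses analogy can itself be run as code. In this sketch the traits and diagnoses are the invented ones from the analogy above; the point is simply that a rule can score perfectly on the sample it was built from and still fail on new cases where the traits are unrelated to diagnosis.

```python
# Six "patients" as (beard, glasses) trait pairs with a diagnosis.
# In this training sample the traits happen, by chance, to coincide
# with diagnosis, exactly as in the analogy.
train = [((1, 0), "AD"), ((0, 1), "AD"), ((1, 1), "AD"),
         ((0, 0), "not AD"), ((0, 0), "not AD"), ((0, 0), "not AD")]

def rule(traits):
    """'Beard or glasses means AD': perfect on the training six."""
    beard, glasses = traits
    return "AD" if beard or glasses else "not AD"

train_acc = sum(rule(t) == y for t, y in train) / len(train)
print(train_acc)  # 1.0: the rule memorises the sample's coincidence

# New cases in which the traits bear no relation to diagnosis:
new = [((1, 0), "not AD"), ((0, 1), "not AD"),
       ((0, 0), "AD"), ((1, 1), "AD")]
test_acc = sum(rule(t) == y for t, y in new) / len(new)
print(test_acc)  # 0.25: worse than a coin toss on unseen cases
```

The perfect training accuracy is exactly the seductive, misleading figure an overfitted classifier reports.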
As the machines rise, it becomes important that clinicians understand such limitations. What makes McMillan et al's study credible, however, is its strong pathological rationale. Something tells me that tau diagnostics and WM have a bright future.
Competing interests None.
Provenance and peer review Commissioned; externally peer reviewed.