Word deafness (also known as auditory agnosia for speech, or as auditory verbal agnosia) is a rare neurobehavioural syndrome characterised by an inability to understand spoken language despite intact hearing, speaking, reading, writing, and ability to identify non-speech sounds. The lesions associated with this condition tend to be bilateral and symmetrical, involving cortical-subcortical tissue of the anterior part of the superior temporal gyri, although Heschl’s gyrus in the left hemisphere is not always completely damaged. Moreover, there have been documented cases of word deafness caused by unilateral left hemisphere cortical and subcortical lesions.1 Although these lesions are anatomically different, they all produce an effective partial hemispheric disconnection.
Hemispheric disconnection has been associated with unusual disruptions of emotional processing. Bowers and Heilman2 reported a patient with a lesion of the deep white matter of the right occipito-temporo-parietal region. This patient could name famous faces and discriminate affectively neutral faces, but could not name facial emotions or select emotional faces reflecting a named emotion. Bowers and Heilman hypothesised a visual-verbal disconnection resulting in an anomia for affective faces. More recently, Bowers, Bauer, and Heilman3 further articulated this idea, suggesting that this patient’s performance resulted from a disconnection between a hypothesised non-verbal affect lexicon in the right hemisphere and the verbal lexicon in left hemisphere, which normally communicate via the deep white matter pathways damaged in their patient.
The documented association between hemispheric disconnection and anomia for facial emotion raises the possibility that similar deficits in emotion processing may be observed in word deafness.
WD1 was a 45 year old man who had suffered a left posterior temporal lobe CVA two years previously. MRI had demonstrated an acute lesion of the left temporal lobe and a chronic lesion of the right temporal lobe. His new stroke produced an initial Wernicke’s aphasia. A pre-existing high frequency sensory hearing loss was also documented. By 18 months after the stroke, the aphasia had resolved and WD1 underwent formal neuropsychological testing with the following results.
Auditory comprehension was limited to single concrete nouns of no more than two syllables (for example, square or circle from the token test) and adjectives such as yellow or red. The words he did understand had to be spoken slowly, loudly, and at a low pitch. He seemed to have general difficulty with rapid tonal transitions that mimic speech sounds, as in the speech sounds perception test and the seashore rhythm test.
Reading comprehension was grossly within normal limits, although he had problems with complex syntax and made occasional paraphasic errors; these may have been residual effects of his acute Wernicke’s aphasia. On the whole, his speech was functional.
He was able to differentiate and recognise a range of environmental sounds quickly and accurately, although he had trouble with high pitched sounds.
He had no apraxia or other motor problems, and he was able to communicate by gestures.
Overall, the results of his neuropsychological evaluation were within normal limits. His specific deficits were consistent with those seen in word deafness.
We administered a modified version of the Florida Affect Battery (FAB),4 including both facial and vocal prosody subtests, in an attempt to determine whether word deafness was associated with a disruption in the processing of affective prosody. The FAB consists of 10 subtests that evaluate emotion processing by different modalities: visual (facial expression), auditory (prosody), and visual/auditory cross-modal. WD1’s performance was compared with that of 20 healthy adult controls. The test was modified, in that all instructions and emotion labels were presented in written form rather than orally.
WD1 performed at chance level on the prosody tasks, regardless of their affective content. This may have been related to a premorbid occupational sensory hearing loss. The possibility that his word deafness also contributed to his poor performance cannot be ruled out. However, the relative influence of word deafness cannot be determined in the absence of control subjects with impaired hearing.
WD1 was able to complete the visual subtests of the FAB, and his ability to discriminate facial identity and facial affect was within normal limits (table 1). His ability to match a stimulus facial expression with one from a target array was also within normal limits. However, he was moderately impaired relative to controls in his ability to match a printed affective name to facial expressions. He was also severely impaired in his ability to select the correct affective face from an array of faces when presented with a printed emotion label—that is, happy, sad, angry, frightened, neutral—despite intact reading and ability to discriminate affective facial expressions.
WD1’s pattern of performance on the FAB was identical to that of Bowers and Heilman’s patient,2 and consistent with a visual-verbal disconnection. This finding raises the possibility that a very specific disturbance of visual affect processing is a component of the word deafness syndrome. However, many neurocognitive syndromes lack a unitary functional basis and instead are an artefact of the behavioural geography of the brain.5 That is probably so for the affective processing disturbance observed in this case. The documentation of intact naming of affect in another case of word deafness would answer this question definitively. At the same time, the functional auditory deficits and characteristic neuroanatomy of word deafness raise intriguing questions about the status of auditory emotion processing in the syndrome, in view of this patient’s preserved ability to identify non-speech sounds.