Specific language impairment and the earliest stages of auditory processing

Yvonne Falk, October 2008


A personal note
As a school psychologist at a school where most students are bilingual or trilingual and come from families with low SES, I see deficient language abilities in the majority of students. Each year I send around ten referrals to speech therapists. However, they assess using performance tests, and it is often hard for them to differentiate between deficient language abilities due to insufficient language stimulation and those due to language impairment. Therefore I am interested in physiological ways of detecting language impairment.

Specific Language Impairment
Children with Specific Language Impairment (SLI) have problems with speaking or understanding language despite normal cognitive development and peripheral hearing. Around five to seven percent of otherwise normally developing children are affected [1,2].
The underlying deficit of SLI is not yet known [1] and might be different for different types of SLI [2]. In many studies, SLI has been associated with a set of heterogeneous impairments [1]. Researchers look at deficits in the linguistic system, at more general deficits in sensory/cognitive mechanisms such as attention [3], and at more specific deficits, such as difficulties in lower-level auditory processing [1].

Cause of SLI
In the mid-1970s, there was a tendency to assume that SLI was caused by factors such as poor parenting, subtle brain damage around the time of birth, or transient hearing loss. Subsequently it became clear that these factors were far less important than genes in determining risk for SLI. 
Evidence for genetic influence comes from twin studies showing that monozygotic twins, who are genetically identical, resemble each other in terms of SLI diagnosis more closely than do dizygotic twins, who share on average 50% of their segregating genes. Statistical analysis of twin data shows that the environment shared by the twins is relatively unimportant in causing SLI, whereas genes exert a significant effect, with heritability estimates typically ranging from around .5 to .75 for school-aged children.
There seems to be no single gene for language, but it seems likely that in the majority of cases the disorder is caused by the interaction of several genes together with environmental risk factors [4].
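Heritability estimates of the kind mentioned above are classically derived from the difference between MZ and DZ twin correlations. A minimal sketch using Falconer's formula (the correlation values below are illustrative, not data from any specific study):

```python
def falconer_estimates(r_mz, r_dz):
    """Falconer's classical twin-study decomposition.

    h2: heritability (genetic variance),
    c2: shared (common) environment,
    e2: non-shared environment plus measurement error.
    Assumes MZ twins share 100% and DZ twins ~50% of segregating genes.
    """
    h2 = 2 * (r_mz - r_dz)   # genetic effect
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # non-shared environment + error
    return h2, c2, e2

# Hypothetical twin correlations, chosen to land in the .5-.75
# heritability range reported for school-aged children:
h2, c2, e2 = falconer_estimates(r_mz=0.80, r_dz=0.45)
# h2 = 0.70, c2 = 0.10, e2 = 0.20
```

Note how a large MZ-DZ gap translates directly into high heritability, while a small gap would instead implicate the shared environment.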

Neural correlates
Several researchers have found a connection between perisylvian polymicrogyria and SLI. Polymicrogyria is an anomaly of cortical development in which neurons reach the cortex but are abnormally distributed, resulting in the formation of multiple small gyri [5].
Other researchers examined the asymmetry of Broca's area. One group found that in normally developing control boys and in autistic boys without language impairment, Broca's area is larger in the left hemisphere than in the right. In boys with SLI, as well as in boys with autism and language impairment, Broca's area is larger in the right hemisphere. Thus, there is a reversal of asymmetry in boys with language impairment [6].

Yet another way to address the underlying impaired brain mechanisms in SLI is to investigate the earliest stages of auditory processing of passively heard tones or morphemes [1], which is the focus of this essay. "Passively heard tones" means that the subject's attention is not focused directly on the tones, but on something else, for instance cartoons.

Evoked brain responses
There are two different types of measures for early brain responses: auditory event-related or evoked potentials and auditory magnetic fields.
Auditory evoked responses mature with age and differ between children and adults. Even the newborn brain detects changes within auditory speech stimuli, but the immaturity of the brain is reflected in a number of ways, one of them being prolonged latencies: the P1m-N1m-P2m complex occurs later in children than in adults. This complicates comparisons between studies on the effect of SLI on evoked brain responses [1], all the more since a delay in maturation has been suggested as a cause of SLI [1,2].
The results of studies are inconsistent. A couple of studies found no differences in the amplitude or latency of P1 [1], N1 and P2 [2], or of the P1-N1-N2 response sequence [1], whereas other studies did find differences. One reason for the inconsistent results could be the heterogeneity of SLI [2].

Mismatch negativity
Another widely studied evoked component is the mismatch negativity (MMN). It reflects the effect of an occasional deviating stimulus in a sequence of frequently repeated ‘standard’ stimuli. A lower MMN amplitude is theorized to reflect poorer sound discrimination.
In children with SLI, a reduced MMN amplitude to tone frequency changes has been reported in several studies. Other studies found a reduced MMN to syllables, or differences in MMN latency between SLI and control children, but not all results have been replicated [1].
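The oddball paradigm described above can be sketched in a few lines. The function below generates a pseudo-random standard/deviant sequence; the deviant probability and the minimum number of standards between deviants are illustrative parameter choices (a common constraint in MMN work, not values taken from any of the cited studies):

```python
import random

def oddball_sequence(n_trials=500, p_deviant=0.15, min_standards=2, seed=0):
    """Generate a pseudo-random oddball stimulus sequence.

    'S' = frequently repeated standard stimulus,
    'D' = occasional deviant stimulus.
    At least `min_standards` standards separate any two deviants,
    so the deviant always breaks an established regularity.
    """
    rng = random.Random(seed)
    seq = []
    since_deviant = min_standards   # allow a deviant from the start
    for _ in range(n_trials):
        if since_deviant >= min_standards and rng.random() < p_deviant:
            seq.append('D')
            since_deviant = 0
        else:
            seq.append('S')
            since_deviant += 1
    return seq
```

The MMN is then obtained by subtracting the average response to standards from the average response to deviants; a smaller difference wave is what the studies above report as "reduced MMN amplitude".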

A personal note on studies based on sound discrimination
Sound discrimination is not a culturally neutral measure. For instance, I live in Sweden, but my mother tongue is Dutch. Compared to Swedish people, I have greater difficulties in discriminating between the Swedish vowels /u/ and /y/, because the sound of /y/ is not part of my mother tongue. If it is true that (for instance) a lower MMN amplitude reflects poorer sound discrimination, I should get lower results on both behavioral and MMN tests that use /u/ and /y/. Yet I don't suffer from SLI. Important to remember when testing bilingual children! -YF

Language impairment is reflected in auditory evoked fields (2008)

The study
In the study by E. Pihko et al., 22 Finnish bilingual preschool children participated, half of them with SLI and half with normal language development. Mean age was 6.6 years (range: 5–7 years). Two sets of consonant-vowel syllables were used in an oddball paradigm: one with a changing consonant (/da/ba/ga/) and one with a changing vowel (/su/so/sy/). The changing-consonant set involved fast frequency changes; the other set did not.
During magnetoencephalography (MEG) recording, the children watched silent cartoons and were instructed not to pay attention to the auditory stimuli. The strength of the equivalent current dipoles for the P1m and P2m responses was measured. After each MEG session the ability of the children to discriminate the stimuli was tested behaviorally.

The results
The researchers found that in behavioral tests the control group discriminated /ba/ and /ga/ from /da/ better than the SLI group. There were no differences between the groups in discriminating /so/ and /sy/ from /su/.
It was found that the P1m responses to the onsets of repetitive stimuli were weaker in the SLI group than in the control children. In other words: there were small but statistically significant differences in the sensory encoding of speech stimuli in the SLI group compared with the normally developing controls.
No differences were found in the strengths of the P2m responses, or in the mismatch responses to any of the stimulus changes.

A difference between this study and the previous inconclusive studies is that this study was more limited in its age range, and the reduced amplitude variation due to age differences may have aided in making the small P1 effect statistically significant.
The diminished P1 in the SLI group found in this study is unlikely to reflect immature sound processing, since P1 decreases with age (unlike N1 and P2). Therefore, the researchers suggest that in their group of SLI children, the sensory encoding of some stimulus features may have been slightly depressed.
The absence of an effect on the MMN is consistent with other studies in which no clear-cut, direct connection between the MMN and behavioral linguistic discrimination tests was found. Consequently, improved MMN paradigms should be considered. A word of caution: even though this study suggests that mildly depressed sensory processing of auditory phonemic stimuli may be associated with SLI, it is improbable that there is a single underlying cause that can explain SLI [1].

Atypical long-latency auditory event-related potentials in a subset of children with specific language impairment (2007)

The study
In the study by D.V.M. Bishop et al., data were used from a previous study by Uwer et al. (2002). 63 children with SLI or typical development participated (age range: 5–10 years). The children performed a behavioral test of consonant discrimination (/da/ba/ga/). Event-related potentials (ERPs) were recorded in response to passively presented standard tone or speech-syllable stimuli. The auditory ERPs were reanalyzed using the intraclass correlation coefficient (ICC). The ICC for the period 100–228 ms (encompassing N1 and P2) was calculated between the datapoints of two waveforms: a normative waveform, the grand average of the control group, and a comparison waveform, the waveform of an individual child.
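The idea of the ICC measure is to score, point by point, how closely an individual child's waveform tracks the control grand average. A minimal sketch of a one-way ICC(1,1) between two waveforms follows; the specific ICC variant is an assumption for illustration, and the exact formula in the paper may differ:

```python
import numpy as np

def icc_waveforms(normative, individual):
    """One-way random-effects ICC(1,1) between two waveforms.

    Each time point is treated as a 'target' rated by two 'judges':
    the normative grand-average waveform and the individual child's
    waveform. High values mean the child's waveform closely follows
    the normative shape; low values mean an atypical waveform.
    """
    data = np.stack([np.asarray(normative, dtype=float),
                     np.asarray(individual, dtype=float)], axis=1)
    n, k = data.shape                       # time points x waveforms
    grand_mean = data.mean()
    row_means = data.mean(axis=1)
    # Between-timepoint and within-timepoint mean squares
    ms_between = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((data - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

In the study's terms, one would restrict both waveforms to the samples covering 100–228 ms before computing the ICC, and a child with a low ICC at a given electrode would fall into the "low ICC" subgroup discussed below.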

The results
It was found that the main effect of group was non-significant, but the SLI group obtained significantly lower ICCs for tone stimuli at certain electrodes on the right side of the head, indicating greater variability from child to child. The results suggested that only a subset of children with SLI have atypical ERPs. This "low ICC" subgroup did not perform worse than the "average ICC" subgroup on behavioral measures, but children with receptive SLI were more likely than controls to have atypical waveforms, whereas this was not the case for children with purely expressive problems. No other differences between subgroups were found.

This study confirms suggestions that children with SLI are heterogeneous, with some showing normal auditory ERPs and others differing from controls.
The ICC analysis and a scrutiny of brain maps suggest that in some children with SLI there is atypical lateralization of brain responses to sounds, which has also been found in other studies. One possible explanation is that brain organization in some children with SLI is qualitatively different from that in typical development, presumably because of genetic influences on prenatal brain development [2].

Auditory evoked fields predict language ability and impairment in children (2008)

Both children with Autism Spectrum Disorder (ASD) and children with SLI show language difficulties. Historically, the presence of autism has been exclusionary for a diagnosis of SLI, because it was assumed that the language impairments of children with autism are secondary to their ASD-associated problems. Recently, however, it has been argued that the language impairments in autism and SLI may overlap. In a number of studies, a subgroup of children with autism has demonstrated similarities to children with SLI in patterns of performance on standardized language measures, phonological processing, and grammatical morphology. Separate research streams have identified impaired auditory perceptual processing in children with autism and children with SLI. This is further supported by a number of MEG and ERP investigations that have demonstrated atypical neural responses to various auditory and speech stimuli in children with ASD or SLI, such as atypical M50/M100 responses to rapid temporal stimuli in both autism and SLI, delayed MMN to speech and tones in autism, and accelerated MMN to tones in Asperger syndrome.

The study
In the study by J.E. Oram Cardy et al., 45 children participated, with typical development, autism (with LI), Asperger syndrome (i.e., without LI), or SLI. Mean age was 11.8 years (range: 7–18 years). 110 trials of a tone were presented binaurally, with no requirement to respond to the tone. The latencies of the left-hemisphere (LH) and right-hemisphere (RH) auditory M50 and M100 peaks were recorded.

The results
The only strong, significant result was that the RH M50 latency predicted oral language ability (the M50 occurred later in children with LI). Nonverbal IQ and ASD-associated behavior ratings were not predicted by any of the auditory evoked fields. There is a known dependence between age and latency, but statistical analyses showed that age did not change the results.
When the researchers limited LI to receptive LI (with or without autism), the results were much stronger: the latency of the RH M50 achieved 82% accuracy in predicting receptive LI.

The most important finding is that receptive language functioning can be predicted, especially by the RH M50, from a simple, non-linguistic auditory stimulus, and could thus potentially serve as a quantitative indicator of this specific diagnosis. It does not predict more general brain (dys)function. This finding may also point to a key neural dysfunction underlying the overlap between subgroups of children with autism and SLI. Possibly there is an RH language dominance that reflects a failure of the normal process of (LH-dominant) language lateralization [7].

Final words

The studies show that there seems to be a connection between receptive SLI and P1. However, P1 is not a completely certain measure of receptive SLI: not all children with a deviant P1 suffer from receptive SLI, and not all children with receptive SLI show a deviant P1.
I have only examined three studies, but many more have been done, and their results are inconsistent. In my opinion the ones that did not show any connection should be reanalyzed, in order to see whether the lack of connection is due to the heterogeneity of SLI. In many studies, the SLI group comprises both children with receptive SLI and children with expressive SLI.
To diagnose a child with SLI is always ultimately a matter of judgment. We weigh several pieces of information: behavioral test results, what difficulties does the child experience in daily life, what do we know about language exposure? Measurement of evoked responses could give us yet another indication. Maybe it won’t be so long before measurement of early brain responses can be part of the information on which a diagnosis is based.

1. Pihko, E., et al. (2008). Language impairment is reflected in auditory evoked fields. Int. J. Psychophysiol.
2. Bishop, D.V.M., et al. (2007). Atypical long-latency auditory event-related potentials in a subset of children with specific language impairment. Developmental Science.
3. Stevens, C., et al. (2008). Neural mechanisms of selective auditory attention are enhanced by computerized training: Electrophysiological evidence from language-impaired and typically developing children. Brain Res.
4. Bishop, D.V.M. (2006). What causes specific language impairment in children? Current Directions in Psychological Science.
5. Rocha de Vasconcelos Hage, S. (2006). Specific language impairment: linguistic and neurobiological aspects. Arq Neuropsiquiatr.
6. De Fossé, L., et al. (2004). Language-association cortex asymmetry in autism and specific language impairment. Annals of Neurology.
7. Oram Cardy, J.E., et al. (2008). Auditory evoked fields predict language ability and impairment in children. Int. J. Psychophysiol.

For any comments, please mail me!