Speech Problems Could Be Corrected Before Child Learns to Talk
Newark, N.J. - April 9, 2008 - If researchers can uncover how the brains of infants distinguish differences in sounds, it may become possible to correct language problems even before children start to speak, sparing them the difficulties that come from struggling with language.
New studies conducted by Professor of Neuroscience April Benasich and her Infancy Studies Laboratory at Rutgers University in Newark are revealing new and exciting clues about how infant brains begin to acquire language and paving the way for correcting language difficulties at a time when the brain is most able to change.
Benasich and her lab were the first to determine that how efficiently a baby processes differences between rapidly occurring sounds is the best predictor of future language problems. Using methods developed in her lab, researchers can determine as early as three to six months of age whether a baby is likely to struggle with language development.
Photo: Professor April Benasich gently covers a baby's head with a net of sensors that reveal how babies process rapidly occurring sounds, a key factor in language development.
Benasich's research is now focused on uncovering in specific detail how the developing brain processes and distinguishes acoustic differences that arrive in rapid succession. The ability to differentiate those sounds, such as the difference between "ba" and "da," is critically important because decoding language requires us to process tiny auditory differences that occur within as little as 40 milliseconds. During the first months of life, the baby's developing brain is also constructing an acoustic map of the sounds of his or her native language, a map that allows the baby to acquire language efficiently. In some infants, however, this process appears to go awry.
About 5 to 10 percent of all children beginning school are estimated to have language-learning impairments (LLI) leading to reading, speaking and comprehension problems, according to Benasich. In families with a history of LLI, 40 to 50 percent of children are likely to have a similar problem. Many of these children go on to develop dyslexia.
Using several novel methods, including dense-array electroencephalography (EEG) and event-related potential (ERP) recordings, Benasich and her lab analyze infants' brain activity, including the proportion of power in the gamma frequency band. The dense sensor array allows the researchers to gently measure a full range of brain activity. Those measurements are obtained by placing a soft bonnet of sensors, resembling a hairnet with lots of little sponges, on a baby's head and then having the infant listen to different series of rapid tone sequences.
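For readers curious about what a "proportion of gamma power" measure involves, the short sketch below shows one common way such a quantity can be computed from an EEG signal. It is an illustration only, not the lab's actual analysis pipeline; the sampling rate, the band limits (gamma taken here as 30-50 Hz) and the simulated signal are all assumptions introduced for the example.

    # Illustrative sketch only: estimate the proportion of gamma-band power
    # in a single EEG channel. Sampling rate, band edges and the simulated
    # signal are assumptions, not values from Benasich's study.
    import numpy as np
    from scipy.signal import welch

    fs = 250.0                                   # sampling rate in Hz (assumed)
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal(int(fs * 60))      # 60 s of simulated EEG data

    # Estimate the power spectral density with Welch's method.
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))

    # Proportion of broadband power falling in the gamma band (30-50 Hz, assumed).
    gamma = (freqs >= 30) & (freqs <= 50)
    broadband = (freqs >= 1) & (freqs <= 100)    # reference range (assumed)
    gamma_proportion = psd[gamma].sum() / psd[broadband].sum()

    print(f"Proportion of gamma-band power: {gamma_proportion:.3f}")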
"We are finding that children who have difficulty processing rapid auditory input are not just showing a simple maturational lag, but are actually processing incoming acoustic information differently," says Benasich.
Specifically, the research shows that babies who struggle with rapid auditory processing appear to use different brain areas (as shown by their neural activity patterns), and perhaps different analysis strategies, to accomplish the task than children who do not have such difficulties. Among their initial findings, the researchers have observed less left-hemisphere activity in the brains of children who struggle with rapid auditory processing than in matched control children. By pinpointing exactly how these brains handle incoming acoustic information differently, it may become possible to guide the brains of babies at risk of developing language problems to work more efficiently before the children even begin to speak.
"We can predict with about 90 percent accuracy what a baby's language capabilities will be just by their response to tones," says Benasich. "Our hope now is that we will be able to gently
guide the brains of infants who are at the highest risk for language learning impairments to be more efficient processors so they can avoid the difficulties that result from struggling with language."
To shed additional light on how inefficiencies in rapid auditory processing might be corrected, Benasich and her team have developed a magnetic resonance imaging (MRI) protocol for scanning naturally sleeping healthy babies, a technique that will allow better localization of active brain areas. To meet the challenge of imaging young children, who typically cannot lie still for extended periods in a scanner, Benasich's team conducts the scans in the evening and asks parents to go through their child's normal bedtime routine, such as reading a story, nursing, rocking and snuggling. Once the child is asleep, headphones playing a steady stream of lullabies and an acoustic foam bonnet are placed on the baby's head to muffle the sound of the MRI scanner.
"Our goal is not only to develop training techniques to correct rapid auditory processing problems, but to identify the period during infant development when the brain is most "plastic," or most able to change through learning," explains Benasich.
The lab's work is funded by several sources, including grants from the Solomon Center for Neurodevelopmental Research, the Don and Linda Carter Foundation, the National Institute of Child Health and Human Development, and a new $460,000 grant from the Ellison Medical Foundation.
For more information on the research being conducted by the Infancy Studies Laboratory at Rutgers University in Newark, please visit babylab.rutgers.edu.
Taken from news.rutgers.edu.