
Beyond ANSI Standards: Acoustic Accessibility for Children with Hearing Loss

Jane Madell, PhD, CCC-A/SLP, LSLS Cert AVT, Carol Flexer, PhD, CCC-A, LSLS Cert. AVT
September 24, 2012

Editor's Note: This text-based course is a transcript of the live seminar, "Beyond ANSI Standards: Acoustic Accessibility for Children with Hearing Loss," presented by Jane Madell, Ph.D., and Carol Flexer, Ph.D. Please download supplemental course materials.

Carol Flexer: Hello, this is Carol Flexer. Thank you all for attending today. Our talk today is Beyond ANSI Standards: Acoustic Accessibility for Children with Hearing Loss. The premise of this talk is that the brain must have access to sound in order to develop, maintain, and sustain listening and spoken language. We are going to talk about what acoustic accessibility looks like, and about issues of not only attaining but also sustaining language. We will also discuss the acoustic requirements for proper acoustic access, strategies designed to identify acoustic access, and techniques for repairing that access when it is diminished. So let's get started.

As professionals, our treatment and technology recommendations are all focused around the family's desired outcome for their child, and that outcome may look different at various ages. What does it take to get there? We know that 95% of children with hearing loss are born to hearing/speaking families, which means that the vast majority of our families are going to be interested in a listening and spoken language outcome. Today we are going to continue to develop the context of what acoustic accessibility looks like. Professional collaboration is so critical in order to attain and sustain listening and spoken language outcomes and to manage technology in environments so that those outcomes can be possible.

Audiological information is absolutely critical, because audiologists are the ones who diagnose hearing loss, program the hearing aids, map the cochlear implant processors, and connect frequency-modulation (FM) systems to these technologies. While audiology is pivotal, we cannot do it alone. Because of universal newborn hearing screening and burgeoning neurological research, the whole landscape of deafness has changed. If spoken language and literacy skills are the desired outcomes, hearing is the first-order event for them. Any time we use the word hearing, we mean auditory brain development. I hope all of us now are talking about hearing as a brain issue and not an ear issue, because in reality, hair cell damage or absence keeps sound from even reaching the brain. In order to develop the neural centers, you have to get the sound there first. The whole idea is that if we can overcome that sensory deficit using technology and get auditory information to the brain, then we can develop a hearing brain and build the critical auditory pathways.

In order to develop the auditory pathways, the child has to have access to intelligible speech that is free of distortion. The child also needs to hear individual speech sounds at soft levels as well as average levels, at a distance and up close, in noise as well as in quiet. Signal-to-noise ratio is the key. In order to get to the brain, sound has to travel through an environment, through the child's technology, and through the various facets of the auditory system. We have to always consider all environments. Acoustic accessibility has to be the first thing on our plate in order to reach the outcomes of listening, spoken language, and literacy.

We have a lot of emerging and continuing brain research that provides a strong scientific basis for our practice. There are massive amounts of auditory-designated tissue in the brain, and we know from research that important changes occur in the higher auditory centers depending on the information that arrives at the brain. Remember that the brain can only organize itself around the stimuli that it receives. There is no magic there. If the brain does not receive sufficient early and ongoing stimulation, then it will be organized differently than if all speech sounds reach the brain from the beginning. Clearly, the auditory cortex is directly involved in speech perception and language processing, and normal maturation of the auditory system is a precondition for the normal development of language skills.

We probably grossly underestimate how much practice and stimulation the brain requires to grow, develop, and cement those neural connections. Looking at the broader literature, Malcolm Gladwell, author of The Tipping Point and Outliers, discusses that it takes at least 10,000 hours of practice to become an expert in a skill. Hart and Risley (1995) found that children in professional families hear about 46 million words by age four. Dehaene (2009) has done research showing that roughly 20,000 hours of listening are required as a basis for reading, while Pittman (2008) asserts that children with hearing loss require three times the exposure to learn new words and concepts due to a reduced acoustic bandwidth. If you have a less robust extrinsic signal, you need more practice in order to grow the internal intrinsic pathways. Hart and Risley (1995) also looked at the implications of practice: the number of words a child knows is directly related to their measured verbal IQ. I will turn it over to Jane now.

Jane Madell: If a child has appropriate technology to maximize acoustic accessibility and they also have enriched auditory exposure, they are going to have good auditory brain development. As Carol said, the reason we need this auditory brain development is to help kids develop enough skill to use the technology that we are providing so that they can learn language, which is critical for developing literacy skills. If a child does not have appropriate technology, even with enriched auditory exposure, they are going to have reduced auditory brain development, resulting in poor language and literacy skills. If a child has good auditory technology with plenty of acoustic accessibility but poor auditory exposure, they are still going to have reduced auditory brain development. That combination is critical. Kids need appropriate technology, appropriately set so that it provides enough acoustic accessibility, coupled with enriched auditory exposure. If a child does not have good auditory brain development, they will not have good language or good literacy. That is the way it is.

The brain is the real ear. The ear is just the first step in getting the signal in. If we do not get the signal in, we are not going to get the auditory development that we absolutely need. So what can we do to get acoustic access to the brain? The biggest problem, for all degrees of hearing loss, is ensuring enough acoustic access. Technology is often not programmed appropriately for what kids need today. We make assumptions about the way we program technology, and we are not always correct. The verification has to be ongoing. We need to evaluate the auditory environment and monitor kids on a regular basis, even daily, to make sure the technology is doing what it is supposed to be doing. We cannot assume. If a child is not progressing the way we expect, and everyone working with the child has high expectations and demands good listening, then we need to suspect that the technology either is not working or is not providing enough acoustic accessibility before we worry about other things.

Hearing loss is not about the ears; it is about how the brain is working. Hearing aids, FM systems, and cochlear implants are only tools to get the signal in so the child can hear what they need to hear. The audiologist is the one who makes this possible for a child with hearing loss. We are the ones who adjust the technology, assess the acoustic environment, and decide whether changes need to be made in the technology or in the acoustic environment. That is our responsibility. So if a child is not progressing, first we need to suspect the technology. Is the child hearing well enough, and is the child hearing high frequencies? You cannot know that if you are not testing the child. Is the child wearing the technology consistently? I did a home visit recently where the family was expecting me. I walked into the house, and the child did not have his hearing aids on. I was upset and disappointed that this child had already been up for a couple of hours and did not have hearing aids on. What can we expect from that? We know that if a child with hearing loss uses technology for only four hours a day, it will take that child six years to hear what a child with normal hearing hears in one year. That, to me, is one of the most critical pieces of information I know. Think about what that means for language development and literacy. Even with excellent technology, children with hearing loss are listening through a damaged auditory system and therefore will need more exposure to learn. If they take the technology off, they are missing valuable time. Kids need to be hearing on a full-time basis.

Carol Flexer: One way to express this concept of full-time hearing is to consider the hypothetical concept of earlids. Humans do not have earlids; the ears do not open and close, which means our brains were designed for 24-hour access to sound. That is not a design flaw. Children with hearing loss only have auditory stimulation when they are wearing technology, and none of today's technologies are engineered for 24-hour wear, although some children do want their technology on even when they sleep. That does not mean that they should, however. We do not know exactly how much exposure the brain needs. We know that it is designed for 24-hour access, so we recommend that patients keep the hearing aids on every waking moment. If the eyes are open, hearing aids should be on so the brain receives massive amounts of auditory stimulation.

As Jane mentioned, if the child is not progressing, the first thing we must verify is the technology and the acoustic accessibility. Then we also need to investigate if the family has appropriate expectations for the desired outcomes based on the child's hearing loss. Do the clinicians who are working with the families have accurate information about hearing loss and possible outcomes? What about the child's environment? It cannot be stressed enough that the environment is not just school. For example, families often ask, "Does my child need to wear their hearing aids after they come home from school?" All that is at stake is their child's brain. We can access, stimulate, grow, and develop neural connections by wearing technology consistently, or we can literally lose auditory capacity. When we take the technology off, it is like putting that child's brain on a shelf. All of the environments a child may encounter are necessary for social learning as well as for academic learning, including the home, playground, car, therapy room, school, et cetera. The brain is designed for access all the time in every environment.

Jane Madell: How do we know that the technology is doing what it needs to do to meet the child's acoustic accessibility requirements? We have to verify that they can hear nearly everything, because otherwise they are not going to develop language. The first thing to check is that the child is hearing throughout the entire frequency range. They need to hear 6,000 and 8,000 Hz, not just out to 4,000 Hz, as was common practice for many years. Those frequencies matter for perception of sibilants and fricatives, and a lot of hearing aids are not providing them, although cochlear implants are. We need to know this, because otherwise children will miss high-frequency sounds, which carry several significant grammatical markers. They are not going to get plurals, possessives, and unstressed prepositions, and those are all critical for language development. If kids do not hear them, not only will they miss out in running conversation, they will likely suffer a language delay and academic and literacy delays.

Children need to be able to hear soft speech. Much of what kids learn, they learn by overhearing. How many times does a child answer a question that you asked your spouse from the next room over? At the time it is annoying, but when we think about it, that overhearing is absolutely critical for language development. Soft speech, which is at about 35 dB HL, is important. If a child's hearing aids are set so that aided thresholds are around 30 dB HL, soft speech is only barely above threshold and is not useful to them. We need to be certain that children are hearing soft speech at a comfortably loud enough level so that they are able to learn language from it. They need to overhear conversation, and they need to hear things around them.

Mary Pat Moeller reported at the ASHA convention in 2011, from her research (Moeller, Hoover, Peterson, & Stelmachowicz, 2009), that 40% of children's hearing aids were underfit. That is a startling number. Her data also show that many kids are not getting acoustic accessibility. On the other hand, the goal of aided hearing is not to hear at 0 dB HL, because we may be over-amplifying and causing distortion. Aided thresholds around 15 to 20 dB HL will give children enough access to average and soft conversation that they should be able to learn language well.
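To make the idea of an underfit concrete, here is a minimal sketch, in Python, of how a set of aided thresholds might be screened against the goals just described. The specific cut points (over 25 dB HL as leaving too little room below soft speech at about 35 dB HL, under 10 dB HL as a possible over-amplification flag) are illustrative assumptions drawn loosely from the 15 to 20 dB HL goal above, not published fitting criteria.

```python
# Rough screen of aided thresholds against the goals discussed above:
# roughly 15-20 dB HL gives access to average and soft conversation,
# while chasing 0 dB HL risks over-amplification. The cut points below
# are illustrative assumptions only, not published criteria.
SOFT_SPEECH_DB_HL = 35  # approximate level of soft speech

def flag_aided_thresholds(thresholds_db_hl: dict[int, float]) -> dict[int, str]:
    """Return a note for each test frequency whose aided threshold looks suspect."""
    flags = {}
    for freq, threshold in thresholds_db_hl.items():
        if threshold > 25:
            flags[freq] = (f"{threshold} dB HL leaves little audibility for soft "
                           f"speech at ~{SOFT_SPEECH_DB_HL} dB HL; consider reprogramming")
        elif threshold < 10:
            flags[freq] = f"{threshold} dB HL may indicate over-amplification or distortion risk"
    return flags

# Example: aided thresholds that slip in the high frequencies
aided = {500: 20, 1000: 20, 2000: 30, 4000: 40, 6000: 45}
for freq, note in flag_aided_thresholds(aided).items():
    print(f"{freq} Hz: {note}")
```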

Technology obviously needs to be distortion-free. Children with hearing loss have more difficulty managing distortion than children without hearing loss, so we need to be sure that the technology is as clear as possible. New technology is so much better than what was available when Carol and I started in this field, and we are not telling you how long ago that was! But you need to be careful even about new technology, because some of its special features can cause distortion. The timing and activation features can cause some critical issues, and some of them may reduce acoustic accessibility. For example, we have to look carefully at how the feedback control on a hearing aid works. If it works by cutting out high frequencies, then it is not doing what we need it to do, because the child is not going to be hearing things we absolutely know they need to hear. We want to make sure there is no distortion in the way the FM is connected to the hearing aid or cochlear implant (CI) processor. That is something that needs to be checked on a regular basis without fail. We cannot assume. We need to know about noise and reverberation and what is going on in the classroom or home situation in that regard. Noise and reverberation affect the signal from the hearing aid and will distort it, making it hard for the child to hear, so we need to do everything we can to control the noise and reverberation in the environment. Even tiny kids will need an FM system in most situations (and most situations are noisy) so we can ensure they are getting the signal in a clear and consistent manner.

Let's talk about speech intelligibility, which is, after all, the goal of fitting technology. Every speech sound needs to be audible. People frequently test using the Ling six sounds, and we need to make sure these kids are hearing all of them at typical conversational levels, about 50 dB HL, and at soft conversational levels, about 35 dB HL. They also need to be able to hear loud conversation, because there are lots of times when loud conversation exists, and we want to be sure that the intensity of those signals is not causing any distortion. As previously mentioned, children need to be hearing soft speech so that they can overhear conversation and use it for incidental learning. We often refer to the speech banana when we present an audiogram or explain hearing loss and speech to a parent. The top of the speech banana, at the softest levels, represents about 90% audibility of what is said. The bottom edge of the speech banana corresponds to hearing only about 10% of what is said. So when someone says, "We want the child to hear in the speech banana," I disagree. You can be hearing in the speech banana and still hear only 50% or less of what is said, and that is not enough. Our goal is to have the child hearing at the top of the speech banana so that they are getting at least 90% of what is said.

Mead Killion and Gus Mueller (2010), in their Count-the-Dots audiogram, estimate speech audibility by counting how many of the dots plotted on the audiogram fall below the person's thresholds and are therefore heard. (A copy of the Count-the-Dots audiogram may be obtained from The Hearing Journal.) It is important to realize that this refers to speech presented at normal conversational levels. If speech is presented at soft conversational levels, we also need to know what the child's speech perception score is. I would like to recommend that we forget the speech banana and talk about the speech string bean. Our goal is to have a child hearing at the top of the banana, or the string bean, which would be a more conservative measure with more stringent criteria for "good" hearing.
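The logic of the count-the-dots estimate can be sketched in a few lines: each dot marks a frequency and level that carries a share of speech information, and the audibility estimate is the fraction of dots that fall at levels the listener can hear. The dot coordinates below are invented purely for illustration; the published Killion and Mueller (2010) form, with its speech-importance-weighted 100 dots, is what should be used clinically.

```python
# Illustrative sketch of the count-the-dots idea: dots are (frequency Hz, level dB HL)
# points representing speech information; a dot is "heard" when its level is at or
# above the listener's threshold at the nearest audiogram frequency. These dots are
# made up for illustration -- use the published Killion & Mueller (2010) form for
# any real estimate.
HYPOTHETICAL_DOTS = [
    (250, 45), (500, 35), (500, 55), (1000, 30), (1000, 50),
    (2000, 25), (2000, 45), (4000, 30), (4000, 50), (6000, 40),
]

def audibility_estimate(thresholds_db_hl: dict[int, float]) -> float:
    """Fraction of dots audible given thresholds, e.g. {250: 20, 500: 25, ...}."""
    def threshold_at(freq: int) -> float:
        nearest = min(thresholds_db_hl, key=lambda f: abs(f - freq))
        return thresholds_db_hl[nearest]

    heard = sum(1 for freq, level in HYPOTHETICAL_DOTS if level >= threshold_at(freq))
    return heard / len(HYPOTHETICAL_DOTS)

# Example: aided thresholds that slope into the high frequencies
aided = {250: 20, 500: 20, 1000: 25, 2000: 30, 4000: 40, 6000: 45}
print(f"Estimated audibility: {audibility_estimate(aided):.0%}")  # 70% with these made-up dots
```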

Carol Flexer: How does the child's auditory environment affect technology decisions? How many of you already have sound level meter apps on your phone? There are numerous applications that can be downloaded onto smartphones for measuring sound levels. Some are free, some cost money. Most of them give you an estimate. You can make a Type II sound level meter out of your phone by using a particular application and buying a microphone that you can calibrate. If you are going to use this method as opposed to a true sound level meter, make sure you get a legally defensible sound level meter app for your phone. I have a digital sound level meter on my phone that gives me a general idea of the sound level in an environment, and I recommend that every family and every clinician have a sound level meter on their phone. If acoustic accessibility is absolutely critical to grow the brain, then we have to have some sense of the distortion, which would be the noise interference, and the noise level in a particular environment. Every environment is going to have different sound, and every place within that environment is probably going to have a different level of noise, depending on the noise sources. Having a reliable app on your phone is a convenient way to measure any environment. I just want to put in a plug for a book that Joe Smaldino and I edited (2012) called Handbook of Acoustic Accessibility: Best Practices for Listening, Learning and Literacy in the Classroom. There is a whole chapter in the book specifically about sound level meter apps and reverberation apps available on our smartphones that can make a huge difference in understanding, managing, and being aware of acoustic accessibility.

With such an app you can determine how noisy it is in the home. Noise varies with circumstance, with the room, and with what is going on in the room at a particular time. What about daycare? That is a nightmare for many, right? If we are putting a child in an inclusive environment, which we agree is important ecologically because we want that child to be around typical language and behavioral models for their age, then they ought to be able to hear in that environment, which means hearing the teacher and the other children. In order to hear other children, they have to hear soft speech. We need to be aware of what the noise scenario is like. What is the reverberation like in that environment? What would be the point of putting a child in this great, ecological, inclusive environment if they cannot hear what is going on in it? What about after-school activities? If we feel that an after-school activity is valuable for the child, then they ought to be able to hear in that activity, which in almost all instances means using a remote microphone or an FM system. But we have to start by identifying the acoustic barriers that occur in any child's environments.
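Once a noise level has been measured with such an app, it can be combined with an estimate of the talker's level at the child's ear to approximate the signal-to-noise ratio. The sketch below assumes a simple free-field drop of about 6 dB per doubling of distance, which real, reverberant rooms will not follow exactly, and compares the result to a +15 dB target, a commonly cited classroom recommendation rather than a figure from this talk.

```python
# Minimal sketch: estimate the signal-to-noise ratio (SNR) at a child's ear
# from a talker level referenced at 1 meter and a noise level measured with
# a phone sound level meter app. The inverse-square (6 dB per doubling of
# distance) rule is a free-field approximation; real rooms are reverberant,
# so treat the result as a rough estimate only.
import math

def speech_level_at_distance(level_at_1m_dba: float, distance_m: float) -> float:
    """Approximate speech level at the listener using inverse-square spreading."""
    return level_at_1m_dba - 20 * math.log10(distance_m / 1.0)

def estimated_snr(level_at_1m_dba: float, distance_m: float, noise_dba: float) -> float:
    """SNR = speech level at the listener minus the background noise level."""
    return speech_level_at_distance(level_at_1m_dba, distance_m) - noise_dba

# Example: average speech (about 65 dBA at 1 m), child seated 4 m from the talker,
# daycare noise measured at 55 dBA.
snr = estimated_snr(65.0, 4.0, 55.0)
print(f"Estimated SNR: {snr:.1f} dB")          # roughly -2 dB
print("Meets a +15 dB target:", snr >= 15.0)   # False -- a remote microphone is needed
```

The same arithmetic shows why a remote microphone helps so much: it effectively moves the talker's mouth to within a few inches of the microphone, largely removing the distance penalty.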

Should an infant or child have an FM? You bet. Noise is a factor even for infants; in fact, the younger the brain, the better the signal-to-noise ratio and extrinsic redundancy must be, because we are growing those auditory neural centers. The child does not yet have a developed neurological and linguistic system to draw on when the incoming signal is poor. So yes, we need to consider FM systems for all children, which means having protocols for discussing the in-depth use of FM with families. We sometimes think telling families to simply reduce or turn off "noise" will do it, but we also need to emphasize that soft noise is still noise: the dishwasher, the TV in the background, music unless it is adult-directed with purpose. Music is absolutely wonderful and critical, but only if it is adult-directed. Music that is just playing in the background might as well be noise. If music is part of the interaction, however, it needs to be presented with focus, distinction, and adult direction. The brain does need music and it is good to sing, sing, sing, but consider turning the music off if it has no purpose.



The Audiologist as the Informer

Jane Madell: What information does the audiologist need to supply as a sensitive, meaningful, collaborative member of the team? We need ways of sharing information immediately. We may see patients and distribute our reports to physicians and other members of the child's team, but are we conscious of how long it takes those reports to actually reach those other members? Especially for the listening and spoken language specialists working with these children, that information is needed immediately. Certainly we have HIPAA to be mindful of, but surely there is a way that we can share information immediately among all team members. The functioning of the technology and the child's acoustic accessibility is absolutely critical information that everyone needs to have right away. The parents should take an audiogram home from every test, whether in a report format or informally, such as on a familiar sounds audiogram.

The audiologist needs to share information about diagnostic testing. Does the child hear at typical and soft levels in competing noise? There needs to be evidence and data documented about that. What are the thresholds with technology for the left ear and the right ear? We need to document function with technology by measuring speech perception and frequency-specific thresholds for each ear monaurally and then binaurally. Why is speech information so important? Pure tones do not tell us how the child performs in daily activities; speech does. What access does their brain have for language, cognition, and social cues? All of that is critical. If we do not test speech perception, we will not know what the child hears functionally and, more importantly, what they are not hearing.

We have to compare performance over time. Sometimes speech perception drops if a child has a progressive hearing loss. Sometimes we see a decrease in speech perception before the unaided pure-tone thresholds change, which has to do with the status of the auditory system. This is beyond what the technology is doing. Therefore, we have to know if there is something we can do to improve auditory function. The next section covers the information the audiologist needs from the family and the interventionists.
In addition to giving information to the people who are working with the child, we also need to obtain information from them to help us figure out what we need to do. We want to know from the parents and from the interventionist how many hours a day the child is wearing the technology. Data logging is a convenient feature in current hearing aids that we can look at, but a subjective report is equally important. Sometimes the parent says the child wears the hearing aid all day, and we look at the data log and it says only two hours a day. We need to know how often the child is wearing the hearing aid, because that is critical information for evaluating progress.

We need to know what environments the child is in. That is how we justify what we need to do to change the technology, and that is how we justify the need for FM. When we go into a daycare center, we need to teach the staff not only how to turn the FM on, but also when to turn it off. If the child is in a one-to-one situation in a quiet room, or if the person wearing the FM microphone is talking to other children or caregivers, the child does not need the FM; but if the talker is speaking to the group, the child does need the FM system.

We want to know if the child likes the hearing aids or cochlear implants. If technology is working well, a child should like it and want to wear it. If it is not working well, what is the problem? Do loud sounds bother him? And if so, what loud sounds? We can do aided threshold testing and see at which bands the child is hearing optimally and which ones might be too loud and then make adjustments to those frequencies.

Where is the child having problems hearing? Is he having trouble hearing in the car? Maybe FM will solve that problem. If the child is sitting in the car and takes his technology off, that is saying to us there is something about the car that is a problem. Maybe if we have an FM system and talk to him while he is in the car he will be able to use that time effectively to learn. Can the child tell you if the device is not working well? What kinds of problems are you concerned about?

We also want to know what the child is hearing. I recently had a family call me for a consultation; their child was implanted bilaterally at 6 months of age. Now, at 18 months, neither mom nor the interventionist could tell me what the child was hearing. As it turned out, this child's amplification was set so loud that the amount of distortion was impeding his ability to acquire language. He had sound, but it was not clear or meaningful. We fixed that quickly and he started developing words within a period of months. It is crucial that we know exactly what kids are hearing, and it is important to remember that real-ear measurement will not tell us that. Real-ear testing will tell us how much sound is reaching the tympanic membrane, but nothing further.

Looking at the auditory development hierarchy, is he starting with the things we expect him to begin with, moving from prosody to lower-frequency and then higher-frequency perception, and from words into sentences? We want to know what his specific perceptions are. We want to look at phoneme perception and record what errors he might be making. If we see he is substituting /b/ for /g/, he may be getting too much low-frequency information. We can look at the frequency allocation table, see the frequencies of the specific phonemes he is having problems with, and learn how to change the technology settings to improve perception.

Does the child hear soft speech, and does he hear from a distance? Both are critical parts of learning. We want to look at the child's voice quality. If there is something unusual about the voice quality, chances are the child is getting a distorted signal. We want to know what the language development is. We expect kids to gain one year of language in one year's time. If a child is not doing that, something is wrong. Is she in the right therapy program? Are the parents working on what they need to at home? Is the child wearing the technology enough hours a day? What are the concerns of the people who are working with the child? Are they satisfied with the child's development?

We should also discuss interpretation of test results and management. I want to point you to an article that was published in Audiology Today (Madell, Batheja, Klemp, & Hoffman, 2011) about speech perception qualifiers. We tried to figure out what people thought were "good enough" speech perception scores for people with hearing loss and found there were people who thought that a score of 40% was good enough if you had a hearing loss. But 40% is not "good enough", especially if you are a child learning language; 40% will not provide you with enough information. It is important to realize that what is appropriate for people with normal hearing is also our goal for a child with a hearing loss. We may not always get it, but it has to be our goal. Excellent speech perception is 90 to 100%. Good speech perception is 80 to 89%, fair speech perception is 70 to 79% and poor speech perception is less than 70%. We should not be writing in our reports that a child has excellent speech perception when the speech perception is 64%, because that is definitely not excellent. I think we need to realistically say what the child's speech perception is, and then we need to figure out what to do to make it better.
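Because those qualifier ranges map directly onto score bands, the reporting rule can be written down explicitly. Here is a minimal sketch using only the cut points listed above (90 to 100% excellent, 80 to 89% good, 70 to 79% fair, below 70% poor):

```python
# Map a word-recognition score (percent correct) to the qualifiers described in
# Madell, Batheja, Klemp, & Hoffman (2011): excellent 90-100%, good 80-89%,
# fair 70-79%, poor below 70%.
def speech_perception_qualifier(score_percent: float) -> str:
    if not 0 <= score_percent <= 100:
        raise ValueError("Score must be between 0 and 100 percent.")
    if score_percent >= 90:
        return "excellent"
    if score_percent >= 80:
        return "good"
    if score_percent >= 70:
        return "fair"
    return "poor"

print(speech_perception_qualifier(64))   # "poor" -- not "excellent"
print(speech_perception_qualifier(92))   # "excellent"
```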

We also need to talk about how everybody on the child's team works together. We need to be sharing information with everyone, and with an open mind. A parent is not a good person to disseminate negative information. If I, as the audiologist, think the therapist is not doing a good job, I cannot say to the mother, "Go and tell the therapist she is not doing a good job." We need to find a way to communicate with the therapist and say that, based on the testing, the child is not progressing, and ask what we can do together to make that work. When a therapist calls me and says, "I do not think he is hearing well enough. He is not getting /s/," I should not say to that therapist, "I set the hearing aid fine; it is your problem." Obviously we would not say it in that tone of voice, but that should not even be what we are implying. We should be saying, "Okay, let's work together here and find out what is happening."

We need to listen to people's evidence. When a therapist tells me a child cannot hear the Ling sounds, or that he is not hearing soft speech and is having trouble in a classroom setting, what do I have to do to fix this? We get a report that a child cannot produce sibilants. Our first question has to be, is he hearing the sibilants? To assess this, I would want to do a speech perception task like the new Plurals Test out of the University of Western Ontario, which is a high-frequency perception task. If the child is hearing the sibilants and cannot produce them, that is one problem. But if he is not hearing the sibilants, I cannot expect him to produce them.

We have a toddler in a noisy daycare center. What do we have to do to make the situation acoustically accessible for this child? That may be an FM system, or maybe putting carpet down in the classroom to reduce some of the noise. We need to look at aided thresholds. If they are not sufficiently soft, around 15 dB HL or so, we need to reprogram or change the technology. Maybe this is the time to look at new hearing aids or to move from hearing aids to a CI. Maybe we need to consider an acoustically tuned earmold; adding a horned or belled bore to an earmold will provide more high-frequency sound. A remote microphone, like FM, is not a substitute for well-programmed technology. We need to be sure that the technology is doing what it needs to do without assistive devices.

I know there are people who think aided threshold testing is not a good idea, but let me say that I feel strongly that we should be doing it, and I am not alone (see Harvey Dillon, Hearing Aids, Thieme, 2012). If we start from a soft level and present a short stimulus, we will not activate the compression circuit of the hearing aid, and we will get the information that we need. We know that we get auditory brain access by using technology. We need to know how much auditory exposure a child has, and the family needs to know how important wearing the technology full time is. The family has to be reliable in telling us what the listening schedule is and how many hours a day the child is wearing it. What kinds of environments is the child in, and where do they need FM? At school and at home? Do they need it on the playground?

How do the parents know what to do? Are they getting good directions from the clinician and audiologist? If the clinicians who are working with them do not give them homework to do at home, how can the parent possibly be directed in what to do? If therapy is happening without the parents even being present in the therapy room, how does the parent know what to do at home? We need to be looking at all of those things.

Carol Flexer: Let me say one more thing about how critical it is for all of us to listen carefully to how the child sounds. We speak what we hear. That is why some people speak Japanese and some English. What we hear consistently determines what we say. If what a child says, or how they say it, is not comparable to hearing peers, then their brain is not receiving that information. How the brain is organized and developed will determine what comes out of that child's mouth. In the absence of severe pathology, such as a lack of articulators, people speak what and how they hear.

As a summary, if we are not getting the outcomes that we expect, always suspect the equipment first. Even if the child was just at the audiologist and everything worked well, that does not mean it is going to continue to work well. Who has not driven their car out of the dealership only to have it break down three blocks later? Equipment is great, but it does not always work perfectly all the time. That should be our number one suspicion. We therefore need to verify immediately, and convey to all members of the team, that the equipment is working well and that the environments allow information to reach the equipment.

As I mentioned earlier, because children's brains are organized around sound, children with hearing loss will have brains similar to children with typical hearing, provided we give them early and relevant exposure. The argument that children do not want to wear their technology is, hopefully, an old conversation from populations that were not fortunate enough to have good acoustic access from the early days and weeks of life. We have to have that enriched auditory stimulation, and we do not want that brain put on a shelf for even a minute during the day. Thank you all for listening.



Question & Answer

What is your experience with frequency-lowering technology in children with sloping hearing losses?

Jane Madell: I find frequency lowering works well for kids with sloping audiograms. While it may help, it does not work as well as a cochlear implant for children with severe and profound hearing losses. But it is something we need to think about.

Carol Flexer: Frequency lowering is not a substitute when you have severe to profound high-frequency hearing loss. Jane is correct in saying that often an implant will give you much better auditory access.

What are recommended resources for school administrators and building planners to support good listening throughout the school environment?

Jane Madell: The book that is coming out by Smaldino and Flexer (2012) will have all the latest information regarding that. It will definitely be sold at the Academy meeting in March and it will contain a lot of the specific recommendations for building planners and schools to provide the acoustics that kids need in order to hear better in those situations.

Speech perception tests are often too difficult for very young children. Are there any new tests for children under age two?

Jane Madell: That is an excellent question. What I do for kids under age two is start off by talking to them using words that the families have told me they should know. The ESP, the Early Speech Perception Test, is also available, and it has objects that can be used for testing kids younger than two. Although this is not a standardized test, I often ask families to bring in objects or toys that the kids use and know at home. You can start off with two things on the table. The child may not reach for an object, but you can look for an eye gaze when you name a particular object, which a child will do if they recognize it. You can add as many as four items for a child under two and be able to get speech perception information. At the clinic I have boxes of familiar objects and ask parents to pick out four or five things the child knows. Alternate the objects and move them around as a way to identify what the child hears. We do need some standardized tests for children of this age, but in the meantime we can use familiar objects, make them into a test, and that will help us learn what we need to know.

I have a patient with mild to moderate hearing loss who is involved in many sporting activities and has good speech and language. Should he be forced to wear his hearing aids while playing football or basketball?

Jane Madell: I know that Carol and I have the same answer to this question, and that is YES. First of all, I do not care for the word forced. Does he need to hear in those situations? Yes, he needs to hear. Here is the trick. Have the coach wear an FM microphone and the kid will be in heaven. He will hear things no one else hears.

Carol Flexer: For the child who does not want to wear his hearing aids in sports, I think it is important to get information and test how the child is hearing in noise and in quiet. We can get some basic information in sound-room testing, and then we can also gather more subjective information out on the playing field, but as you mentioned, Jane, it is important to ask the child what is going on.



References

Dehaene, S. (2009). Reading in the brain: the science and evolution of a human invention. New York: Penguin.

Hart, B., & Risley, T.R. (1995). Meaningful differences in the everyday experience of young American children. Baltimore: Brookes Publishing Company.

Madell, J., Batheja, R., Klemp, E., & Hoffman, R. (2011). Evaluating speech perception performance. Audiology Today, September-October, 52-56.

Moeller, M.P., Hoover, B., Peterson, B., & Stelmachowicz, P. (2009). Consistency of hearing aid use in infants with early-identified hearing loss. American Journal of Audiology, 18, 14-23.

Killion, M.C., & Mueller, H.G. (2010). Twenty years later: a NEW count-the-dots method. The Hearing Journal, 63(1), 10, 12-14, 16-17.

Pittman, A. (2008). Short-term word-learning rate in children with normal hearing and children with hearing loss in limited and extended high-frequency bandwidths. Journal of Speech, Language, and Hearing Research, 51, 785-797.

Smaldino, J.J., & Flexer, C. (2012). Handbook of acoustic accessibility: best practices for listening, learning, and literacy in the classroom. New York: Thieme.

Cite this article as:

Madell, J., & Flexer, C. (2012, September 24).  Beyond ANSI standards: Acoustic accessibility for children with hearing loss.  AudiologyOnline, Article 11135.  Retrieved from https://www.audiologyonline.com



Jane Madell, PhD, CCC-A/SLP, LSLS Cert AVT

Director of Pediatric Audiology Consulting

Dr. Jane Madell is Director of Pediatric Audiology Consulting. She has been a pediatric audiologist for more than 40 years. Dr. Madell is a certified audiologist, speech-language pathologist, LSLS, and auditory-verbal therapist. Her clinical and research interests have been in the evaluation of hearing in infants and young children, management of children with severe and profound hearing loss, selection and management of amplification including cochlear implants and FM systems, assessment of auditory function, and evaluation and management of auditory processing disorders. Dr. Madell has published 3 books, 16 book chapters, and numerous journal articles, and is completing a 4th book. She is a frequent presenter at professional meetings.



Carol Flexer, PhD, CCC-A, LSLS Cert. AVT

The University of Akron and Northeast Ohio Au.D. Consortium & Listening and Spoken Language Consulting

Dr. Carol Flexer received her doctorate in audiology from Kent State University in 1982. She was at The University of Akron for 25 years as a Distinguished Professor of Audiology in the School of Speech-Language Pathology and Audiology. Special areas of expertise include pediatric and educational audiology. Dr. Flexer continues to lecture and consult extensively nationally and internationally about pediatric audiology issues. She has authored numerous publications and co-edited and authored ten books. Dr. Flexer is a past president of the Educational Audiology Association, a past president of the American Academy of Audiology, and a past-president of the Alexander Graham Bell Association for the Deaf and Hard of Hearing Academy for Listening and Spoken Language.



