
Interview with Brett A. Martin, Ph.D., Associate Professor, CUNY Graduate Center

Brett A. Martin

September 28, 2009

Topic: Testing the Acoustic Change Complex, and Other Research Underway in the Program in Speech-Language-Hearing Sciences at the CUNY Graduate Center



Dr. Carolyn Smaka: This is Carolyn Smaka from AudiologyOnline, and today I am speaking with Dr. Brett Martin, an associate professor at the Graduate Center of City University of New York (CUNY). Dr. Martin, welcome to AudiologyOnline. Can you start by telling us about your research lab at CUNY?

Dr. Brett Martin: Of course. My lab is the Audiology and Auditory Evoked Potentials Laboratory, and it's one of several labs in the department. My lab includes equipment set up for multi-channel evoked potential recording and analysis, as well as behavioral audiologic testing.

Smaka: How many research projects are currently under way in your lab?

Martin: More than I can count on one hand; however, my interests have really focused on speech-evoked potentials for the last few years. My goal is to develop a clinically feasible tool to evaluate speech perception capacity that is appropriate for infants and young children with normal hearing and with hearing loss.

Toward this goal, I've been using the obligatory cortical evoked potential, the P1-N1-P2 complex, or its analog in children, as an index of discrimination. Traditionally, the P1-N1-P2 complex has been used to index cortical processing of transient sound onset, and its main application in audiology has been audiometric threshold estimation. I am not using it for this. Instead, I am using it to index the capacity, given intact higher centers, of the auditory cortex to process acoustic change within a sound stimulus, such as speech.

For example, when I present the stimulus "ooh-eee," which contains a change of the second formant frequency at mid-point, there is a P1-N1-P2 complex elicited by the onset of the stimulus, another elicited by the acoustic change from "ooh" to "eee," and then a third elicited by sound offset. I'm interested primarily in the evoked potential elicited by the acoustic change at stimulus mid-point. For clarity, I just refer to this response as the "Acoustic Change Complex," or ACC.
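For readers who want to picture the analysis, the minimal sketch below shows how responses time-locked to stimulus onset, to the mid-stimulus acoustic change, and to sound offset could each be averaged from one continuous recording. The sampling rate, stimulus timing, epoch windows, and data are illustrative assumptions, not values or code from Dr. Martin's lab.

```python
# Illustrative sketch only: averaging EEG epochs time-locked to stimulus
# onset, to the mid-stimulus acoustic change, and to sound offset.
# All timing values and the data itself are assumptions for illustration.
import numpy as np

fs = 1000                     # assumed sampling rate, Hz
stim_dur = 0.8                # assumed duration of the "ooh-eee" stimulus, s
change_at = 0.4               # assumed time of the "ooh" -> "eee" change, s

def average_epochs(eeg, event_samples, tmin=-0.1, tmax=0.5):
    """Average single-channel EEG epochs time-locked to the given event samples."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = [eeg[s - pre:s + post] for s in event_samples
              if s - pre >= 0 and s + post <= len(eeg)]
    return np.mean(epochs, axis=0)

# Hypothetical data: continuous EEG plus the sample indices of stimulus onsets.
eeg = np.random.randn(600 * fs)                        # 10 minutes of fake EEG
onsets = np.arange(5 * fs, len(eeg) - 2 * fs, 2 * fs)  # one stimulus every 2 s

onset_response  = average_epochs(eeg, onsets)                         # P1-N1-P2 to onset
acc             = average_epochs(eeg, onsets + int(change_at * fs))   # Acoustic Change Complex
offset_response = average_epochs(eeg, onsets + int(stim_dur * fs))    # response to offset
```

With real data, the same averaging applied at the three time-locking points yields the onset response, the ACC, and the offset response from a single run.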

To date, my colleagues Drs. Arthur Boothroyd and Jodi Ostroff and I have shown that the ACC can be obtained to changes of spectrum, amplitude, and periodicity, alone or in combination. In other words, it's elicited by all of the kinds of acoustic change that are important for speech perception.

I have also shown that the ACC has good test-retest reliability for both adults and children, which is a positive finding for the potential clinical application of the ACC.

In addition, I recently completed a project together with Drs. Arthur Boothroyd, Tiffany Berth, and Dassan Ali that I'm very excited about. In order for the ACC to be a feasible tool in the clinic, it is important to obtain maximum information in minimal time, and this project addressed this need. The purpose was to compare four strategies of stimulus presentation to assess their efficiency when generating the ACC in adults and children aged 6-9 years.

The basic idea was to determine whether the silent interval between stimuli could be eliminated in order to save testing time. The potential catch was that the increased rate of presentation could result in decreased response amplitudes due to neural refractoriness, particularly in children. This might cancel any advantage in terms of test time and signal-to-noise ratio.

In the first strategy, the stimulus "ooh-eee" was presented every two seconds. In the second strategy, I took away the silent period between the stimuli, so that the stimulus alternated continuously between "ooh" and "eee," and I call that the alternating strategy. The latter strategy also doubled the number of acoustic changes presented, because there were now two directions of change: "ooh" to "eee" and the reverse, "eee" to "ooh."

Then I had two additional strategies that served as controls; for the first, I just reduced the inter-onset interval between stimuli to one second, speeding things up, and this controlled for the faster stimulus rate of the alternating condition. In the other, I used the stimulus "ooh-eee-ooh" and presented that every two seconds. That controlled for the additional direction of acoustic change in the alternating condition.
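To see why eliminating the silent interval is appealing, the short sketch below counts the acoustic changes delivered per minute under each of the four strategies. The 0.5-second vowel-segment duration is an assumption chosen only for illustration; the general point is that more changes per minute means more responses to average in the same test time, provided neural refractoriness does not shrink each individual response too much.

```python
# Rough illustration of stimulation efficiency under the four strategies.
# The segment duration is assumed; it is not a value from the study.
seg = 0.5   # assumed duration of each vowel segment ("ooh" or "eee"), in seconds

strategies = {
    # name: (inter-onset interval in s, acoustic changes per stimulus)
    "ooh-eee every 2 s":      (2.0, 1),
    "continuous alternation": (2 * seg, 2),   # no silence; change in both directions
    "ooh-eee every 1 s":      (1.0, 1),       # control for the faster rate
    "ooh-eee-ooh every 2 s":  (2.0, 2),       # control for the extra direction of change
}

for name, (ioi, changes) in strategies.items():
    per_minute = 60.0 / ioi * changes
    print(f"{name:24s} -> {per_minute:3.0f} acoustic changes per minute")
```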

In the end, what we found is that the second strategy, the alternating strategy, was the most efficient for both adults and children. That is, despite eliminating the silence between stimuli and doubling the acoustic changes presented in a given time period, there was no serious reduction in response amplitude.

Although this finding is limited to adults and children six years and older, it will be interesting to see if a similar pattern is obtained in infants, which is a project that I'm currently working on in conjunction with Lisa Goldin, one of the students in the lab.

Smaka: Is it a challenge to find infant research subjects?

Martin: It is one of the most challenging things about that type of research. Once the kids are about two years old, it's not as challenging, because by then parents are more comfortable with letting their children participate. Getting the younger kids is definitely harder; it just takes a consistent effort and consistent follow-through to actually get those kids in. It's a slow process, but eventually they do come.

Smaka: Would you see testing the ACC having eventual application to both normal hearing subjects and those with hearing loss?

Martin: Definitely, work in the lab is moving toward clinical application. We have studies looking at the maturation of the ACC, assessing the effect of degraded listening conditions, and looking at applying testing to various clinical populations: sensorineural hearing loss, auditory processing disorders, and those using hearing aids and/or cochlear implants.

Another issue that I'm currently working on with Simon Henin, a student in the department, as well as Dr. Lucas Parra, who is in the biomedical engineering department at City College, is an algorithm to cancel cochlear implant artifacts from evoked potential recordings. This is a very specific task that we are working on in order to help move this from the laboratory to the clinic for individuals with cochlear implants.
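For context, the sketch below illustrates one generic, simple approach to time-locked electrical artifact: blanking a brief window around each known stimulation pulse and bridging it by linear interpolation. It is emphatically not the algorithm that Simon Henin, Dr. Parra, and Dr. Martin are developing; the pulse times, window width, and sampling rate here are all assumptions for illustration.

```python
# Generic illustration of pulse-artifact handling, NOT the algorithm under
# development in Dr. Martin's lab. All values below are assumptions.
import numpy as np

def interpolate_over_pulses(eeg, pulse_samples, fs, blank_ms=1.0):
    """Replace a short window around each stimulation pulse with a straight line."""
    cleaned = eeg.copy()
    half = max(1, int(blank_ms / 1000 * fs / 2))   # half-width of the blanked window
    for p in pulse_samples:
        lo, hi = max(p - half, 1), min(p + half, len(eeg) - 2)
        # bridge between the samples just outside the blanked window
        cleaned[lo:hi + 1] = np.linspace(cleaned[lo - 1], cleaned[hi + 1], hi - lo + 1)
    return cleaned

fs = 16000                                       # assumed sampling rate, Hz
eeg = np.random.randn(2 * fs)                    # 2 s of fake recording
pulses = np.arange(1000, len(eeg) - 1000, 800)   # hypothetical pulse positions, in samples
cleaned = interpolate_over_pulses(eeg, pulses, fs)
```

Real cochlear implant artifact is considerably harder to handle than this, which is why a dedicated cancellation algorithm is needed before cortical responses such as the ACC can be measured reliably in implant users.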

Smaka: What challenges have you faced with some of these projects?

Martin: I think the challenges I've encountered are common to any research lab. In addition to challenges getting the youngest subjects into the lab as we talked about, other challenges are the steep learning curve for new students and the fact that there are just not enough hours in the day to get everything accomplished.

Smaka: I'm sure it varies, but what is the typical timeframe for one of your studies from start to finish?

Martin: It really does vary. We can do an adult study relatively quickly because adults are easier to recruit; therefore, we can get an adult study finished in a couple of months. In contrast, due to recruitment difficulty, the studies that we do with young children can even go on for a couple of years.

I actually just finished a multi-year grant from NIH-NIDCD for a study titled "ERP Measures of Auditory Processing," which looked at ACC maturation and test-retest reliability from birth through adulthood. That study did, indeed, take multiple years to complete.

Smaka: Other than NIH and NIDCD with this grant, what other sources of funding do you have for your research?

Martin: I have some internal PSC-CUNY funds, which stands for Professional Staff Congress of the City University of New York. In addition, one of the students in the lab, Christine Rota-Donahue, just received an award from the Long Island Speech and Hearing Association, which was a nice surprise and helps fund her doctoral studies.

Smaka: Can you talk about the faculty at the Graduate Center?

Martin: Within our department, the Speech-Language-Hearing Sciences Department, we have seven central faculty members, and each faculty member has his or her own lab. We've already discussed my lab, the Audiology and Auditory Evoked Potentials Laboratory. Dr. Glenis Long is the other faculty member in hearing, and she runs the Hearing Science Lab, in which she does research in otoacoustic emissions and psychoacoustics to probe cochlear function. She has a Tucker Davis system for distortion product otoacoustic emissions and many other pieces of equipment in there. She is currently working on developing a new way to measure DPOAEs and a way to better evaluate efferent function. She's also doing some collaborative projects looking at changes in cochlear non-linearity in tinnitus patients as well as doing some work with infants.

Dr. Valerie Shafer runs the Developmental Neurolinguistics Lab. She is a linguist interested in language development and developmental language disorders, and her research aims to look at the relationship between language and brain development. She uses both traditional behavioral approaches and electrophysiological approaches to studying language. Therefore, she also has some electrophysiology equipment in her lab, but she does a lot of standard behavioral testing, and she approaches all of her work from a linguistic perspective.

Dr. Richard Schwartz runs the Developmental Language Lab, and the research in his lab aims to look at the nature and underlying causes of childhood language impairments. Studies in his lab examine the relationship between speech perception, the processing of language, and the brain mechanisms underlying language processing and production in young children acquiring language, both typically and atypically. He is particularly interested in children with specific language impairments and those using cochlear implants.

Valerie, Richard, and I actually have a paper in the pipeline showing that children with specific language impairment often show a small Ta component of the evoked potential T-complex, along with atypical waveform morphology. I never thought I would be doing research in language, because my background is in hearing, but because of the group of faculty that we have, I was able to collaborate and do such a study. The CUNY Graduate Center is a very exciting place to be, because there are so many different people with different backgrounds approaching the same questions in different ways. It is a great opportunity to be able to collaborate and work together.

Dr. Loraine Obler is in charge of the Neurolinguistics Laboratory, and a lot of her work focuses on the areas of aging, aphasia, and bilingualism.

Dr. Winifred Strange, unfortunately, is going to be retiring at the end of the year, but she runs the Speech Acoustics and Perception Laboratory. Her research interests include coarticulation, cross-language differences in speech production and perception, and the perception of speech and meaningful non-speech environmental sounds. She and I are actually going to be teaching a course together this semester on speaker-listener adaptation in non-optimal communication settings.

Lastly, the newest person on the faculty is our new executive officer, Dr. Klara Marton. I believe her research interests include language and executive function.

Smaka: Have any recent research projects been particularly rewarding or memorable for you?

Martin: I think my recent studies with infants and toddlers have been particularly rewarding, just because the challenges are different than when testing older subjects. For instance, getting good impedances and obtaining reasonably clean evoked potential recordings from awake infants and toddlers is such a challenge. It's more of an art than a science, and I'm happy to say that I think my lab has mastered the art of obtaining good recordings from these populations. This is a good thing because, in addition to following up on the efficiency study that I already mentioned, I have three other ACC studies underway that include infants.

Smaka: At this point, I am sure you could write a how-to clinical paper on the techniques for testing infants. You mentioned Christine's award and your NIH-NIDCD grant - have there been any other special achievements in your lab, or any recent publications?

Martin: Well, I was very pleased to have a paper that I wrote along with Drs. Kelly Tremblay and Peggy Korczak mentioned in The Hearing Journal's Annual "Best Of" Series. That paper was called Speech Evoked Potentials from the Laboratory to the Clinic, and it was published in Ear and Hearing.

I also did a joint presentation with Dr. Jay Hall at AAA this year, which was titled "Auditory Evoked Response Advances: From the Laboratory to the Clinic."

Smaka: Dr. Martin, I look forward to reading your future publications on ACC testing. Thank you so much for your time today. If readers would like further information, they can visit www.gc.cuny.edu or web.gc.cuny.edu/speechandhearing/labs/aaepl/indexaaepl.htm

Martin: Thank you. It's been a pleasure talking with you.