
Cochlear Implant Technical Issues: Electrodes, Channels, Stimulation Modes and more

Aravind Namasivayam
June 21, 2004
Graduate Department of Speech-Language Pathology,
University of Toronto, Toronto, Ontario


Introduction

Cochlear implants (CIs) exist at the crossroads of biology and technology. Research and development of CIs have been underway for over 30 years, and the goal has always been to bypass damaged inner ear hair cells and optimally transfer auditory information to the brain.

A CI is an electronic device that restores hearing to severely and profoundly hearing-impaired adults and children. Its components can be divided into internal and external parts. The internal components are the receiver-stimulator, which is surgically placed behind the ear, and the electrode array, which is placed within the cochlea. The external components are worn much like a hearing aid and consist of a microphone, a processor and a transmitter.

Acoustic signals are received, processed and transmitted transcutaneously to the internal component (the receiver-stimulator), and are then delivered as electrical signals via the electrode array directly (bypassing the hair cells) to the auditory nerve dendrites and the spiral ganglion cells. Finally, these stimulations are interpreted as "meaningful" sound by the auditory central nervous system and the brain.

There are several CI research centers striving to achieve optimal transfer of acoustic information to the brain via the CI. The shared goal of professionals, patients and manufacturers is to achieve the best possible speech understanding for people with hearing loss.

This paper briefly summarizes the interaction between electrode variables, stimulation modes and speech coding strategies, all of which are important to achieving optimal transfer of acoustic information to the brain. It then discusses electrode design and speech coding strategies, and how these variables can be adjusted to suit damaged cochleae.

Electrodes

Each CI has an electrode array (typically a series of tiny metal rings). These electrodes electrically stimulate auditory nerve endings to create sound sensations. The cochlea is differentially sensitive to sound frequencies. Maximum sensitivity to low frequencies occurs at the apex and maximum sensitivity for high frequencies occurs at the base. Therefore, stimulating electrodes at different regions along the length of the cochlea produces percepts of different pitches (frequencies). The number of electrodes in a given implant depends on the manufacturer.
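As a rough illustration of this tonotopic organization, the sketch below uses the Greenwood place-frequency approximation of the normal human cochlea to estimate the characteristic frequency near different electrode positions. The basilar membrane length and the electrode insertion depths are illustrative assumptions, not values from any particular device.

```python
# Sketch: estimating characteristic frequency along the cochlea with the
# Greenwood place-frequency function (human constants A=165.4, a=2.1, k=0.88).
# The basilar membrane length and electrode depths are illustrative only.

BM_LENGTH_MM = 35.0  # approximate human basilar membrane length (assumed)

def greenwood_frequency_hz(distance_from_apex_mm: float) -> float:
    """Characteristic frequency at a point measured from the cochlear apex."""
    x = distance_from_apex_mm / BM_LENGTH_MM  # proportional distance (0 = apex, 1 = base)
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Hypothetical electrode positions expressed as insertion depth from the base.
insertion_depths_mm = [5, 10, 15, 20, 25]

for depth in insertion_depths_mm:
    distance_from_apex = BM_LENGTH_MM - depth
    f = greenwood_frequency_hz(distance_from_apex)
    print(f"Electrode at {depth:4.1f} mm from the base ~ {f:7.0f} Hz")
```

The deeper (more apical) contacts map to progressively lower frequencies, which is exactly the pattern electrode arrays try to exploit.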

Channels

Channels can be thought of as the output of filter banks. Theoretically, each channel can be made to correspond to a single electrode (1-to-1 mapping). However, this has not been applied in practice because closely spaced electrodes may not produce truly different sound percepts. In other words, 2 or more electrodes may stimulate the same auditory nerve ending and produce only one pitch percept. Thus, too many electrodes may lead to redundancy between electrodes, and decrease their ability to deliver distinct sound frequencies. However, having more electrodes available may offer a different kind of benefit. Sometimes, regions of the cochlea may not be responsive to electrical stimulation. If we had more electrodes, one could stimulate adjacent responsive regions using a different subset of electrodes. Hence, having more electrodes would be theoretically beneficial in programming CIs in the presence of problem locations within the cochlea.
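To make the channel idea concrete, the sketch below splits a nominal speech band into logarithmically spaced analysis channels and assigns each to an electrode in the idealized 1-to-1 mapping described above. The band edges and the 12-electrode count are illustrative assumptions, not any manufacturer's specification.

```python
# Sketch: dividing an input band into logarithmically spaced analysis channels
# and mapping each channel to one electrode (an idealized 1-to-1 mapping).
# The 250 Hz - 8 kHz band and the 12-electrode count are illustrative only.
import numpy as np

def channel_edges_hz(low_hz: float, high_hz: float, n_channels: int) -> np.ndarray:
    """Logarithmically spaced band edges covering [low_hz, high_hz]."""
    return np.geomspace(low_hz, high_hz, n_channels + 1)

edges = channel_edges_hz(250.0, 8000.0, n_channels=12)

for electrode, (lo, hi) in enumerate(zip(edges[:-1], edges[1:]), start=1):
    # Electrode 1 is taken here as the most apical (lowest-frequency) contact.
    print(f"Channel {electrode:2d} -> electrode {electrode:2d}: {lo:6.0f} - {hi:6.0f} Hz")
```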

Stimulation Modes

Another reason there is no "1-to-1" mapping between channels and electrodes is that the number of available electrodes for stimulation depends on the mode of stimulation used. There are 2 fundamental modes of stimulation employed by CIs to deliver the electrical current to the auditory nerves. The first is monopolar and the second is bipolar.

In the monopolar stimulation mode there is one active electrode and one return electrode (also called the ground electrode, located in the implant package), and current flows between the two. Since the active and return electrodes are widely spaced, the current spreads over a wider area and stimulates a larger neuronal population. In the bipolar mode, 2 adjacent electrodes are paired as active and return, and stimulation is tightly focused on a small population of auditory nerve fibers. Thus, 2 electrodes provide only one channel between them: a 16-electrode system delivers only 8 channels, and a 24-electrode system delivers 12 channels in the bipolar configuration.
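The arithmetic above can be captured in a tiny sketch, assuming simple adjacent pairing for the bipolar case; real devices also offer other couplings (such as BP+1 or common ground), so this is bookkeeping rather than a device specification.

```python
# Sketch: how the stimulation mode affects the number of channels a given
# electrode array can provide. The adjacent-pairing rule is illustrative;
# actual devices offer additional configurations (e.g., BP+1, common ground).

def available_channels(n_electrodes: int, mode: str) -> int:
    """Monopolar: each intracochlear electrode forms its own channel with a
    remote ground. Bipolar: adjacent electrodes are paired, halving the count."""
    if mode == "monopolar":
        return n_electrodes
    if mode == "bipolar":
        return n_electrodes // 2
    raise ValueError(f"unknown mode: {mode}")

for n in (16, 24):
    print(n, "electrodes ->",
          available_channels(n, "monopolar"), "monopolar channels,",
          available_channels(n, "bipolar"), "bipolar channels")
```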

There are distinct advantages to each stimulation mode. For example, one of the factors that determines the loudness of a sound is the number of neurons stimulated. Because the monopolar mode spreads current over a large number of neurons, it can achieve higher loudness levels with lower current than the bipolar mode. In the bipolar mode, however, stimulation of electrodes in close proximity provides more spatially selective stimulation than the monopolar mode (see Osberger and Fisher, 1999). Thus, CIs can deliver monopolar and bipolar modes of stimulation, and the selection of either is dictated by individual needs and responses (the ability to produce adequate loudness with low current levels) and the stimulation strategy used (simultaneous or non-simultaneous; see the section on electrode design in multichannel CIs and the anatomy of the cochlea).
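This trade-off can be pictured with a deliberately simplified current-spread model. The sketch below assumes excitation falls off exponentially with distance from the active contact, with a broad space constant for monopolar and a narrow one for bipolar coupling; both constants are invented for illustration and are not measurements from any device.

```python
# Sketch: a highly simplified current-spread comparison. Real intracochlear
# fields are far more complex; the exponential decay and the space constants
# below (broad for monopolar, narrow for bipolar) are assumptions chosen
# purely to illustrate "wide versus focused" stimulation.
import numpy as np

def relative_excitation(distance_mm: np.ndarray, space_constant_mm: float) -> np.ndarray:
    """Relative excitation strength versus distance from the active electrode."""
    return np.exp(-np.abs(distance_mm) / space_constant_mm)

distance = np.linspace(-6, 6, 13)  # mm along the cochlea from the active contact
monopolar = relative_excitation(distance, space_constant_mm=3.0)   # broad spread (assumed)
bipolar = relative_excitation(distance, space_constant_mm=0.75)    # focused spread (assumed)

for d, m, b in zip(distance, monopolar, bipolar):
    print(f"{d:+5.1f} mm  monopolar {m:4.2f}  bipolar {b:4.2f}")
```

The broad monopolar profile recruits more neurons per unit current (easier loudness growth), while the narrow bipolar profile keeps stimulation spatially selective.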

Single Channel and Multichannel

To maximally transfer acoustic information to the brain via electrical signals, all CI systems need to closely adhere to the "natural" laws of signal transfer, as demonstrated and employed by the normally functioning cochlea. That is, researchers must consider how best to code speech signals electrically, in a way that will make best use of the cochlea's anatomic organization and mimic the natural processes occurring within the auditory system.

The information contained in the speech signal can be grossly divided into intensity and frequency subcomponents. Each of these can be electrically transferred via the CI to the brain. Intensity coding is achieved by manipulating the electrical current pulse width, pulse height, and the quantity of auditory nerve fibers stimulated in the cochlea. For example, a bipolar plus one (BP+1) mode of stimulation (active and reference electrodes separated by one non-active electrode) will sound louder than a BP mode at the same current level, because more nerve fibers are stimulated in the former case. Frequency coding depends on the rate of nerve firing (temporal theory) and the place of stimulation along the length of the cochlea (tonotopic or place theory).
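One convenient way to see how pulse width and pulse height trade off is to track the charge delivered per pulse phase, which is simply the product of the two; the current levels and phase widths in the sketch below are made-up values used only to show the arithmetic.

```python
# Sketch: charge per phase of a biphasic pulse as amplitude x pulse width.
# Current levels and pulse widths below are made-up illustrative values.

def charge_per_phase_nc(current_ua: float, phase_width_us: float) -> float:
    """Charge per phase in nanocoulombs (uA * us = picocoulombs; /1000 -> nC)."""
    return current_ua * phase_width_us / 1000.0

examples = [
    (100.0, 25.0),   # low current, short phase
    (200.0, 25.0),   # doubling the current doubles the charge
    (100.0, 50.0),   # doubling the phase width has the same effect on charge
]

for current, width in examples:
    q = charge_per_phase_nc(current, width)
    print(f"{current:5.0f} uA x {width:4.0f} us -> {q:5.1f} nC per phase")
```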

CI manufacturers have developed 2 basic types of CIs: single channel and multichannel devices.

Single channel CIs code frequency based on the rate of firing of electrical pulses. Multichannel CIs use the place theory strategy for coding frequency, wherein different frequencies from the auditory signal are separated and presented in a tonotopic manner along the length of the cochlea via the electrode array. The place theory strategy of coding is currently utilized by most CI manufacturers.

Although the use of multichannel CI devices is currently prevalent around the world, there are some arguments against the rationale for the use of such devices. According to House (1995), one of the pioneers of CIs, the cochleae of people with severe hearing impairment do not possess a sufficient residual neuronal population to use tonotopic electrical stimulation through multiple electrodes. House (1995) referred to a study by Linthicum, Fayad, Otto, Galey and House (1991) in which most of the 16 implanted temporal bones they examined had few or no basilar membrane dendrites. Therefore, he reasoned, if multichannel electrode systems aim to stimulate focal regions of dendrites along the basilar membrane, which may not exist in some or most cases, then the rationale for using tonotopic stimulation strategies is poorly argued.

Further, the idea that multiple electrodes may still tonotopically stimulate the spiral ganglion cells within Rosenthal's canal in the modiolus has been disputed by House (1995), who argued: "...the spiral ganglion cell bodies are closely packed, and are surrounded by blood vessels and fluid spaces. The electrolytic fluid of the scala tympani surrounds the active electrodes. This fluid seems ideally suited to conduct electric fields generated by the CI widely and generally. Under these circumstances it would seem completely illogical to assert that a current is somehow finding its way to, and stimulating only certain discrete numbers of spiral ganglion cells in exclusion to any others" (p. 20).

However, if this argument were true, there should be no difference in speech perception between single channel and multichannel devices. This is clearly not the case. In fact, some multichannel devices yield better speech understanding scores in the auditory-alone condition (e.g., 72% on CID sentences with the Clarion HiFocus system, and greater than 80% on open-set CID sentences with the Nucleus SPEAK strategy; Clark, 1997) than the combined auditory and lipreading scores obtained with single channel devices (e.g., AllHear Inc. website monograph, 1995 findings).

Moreover, temporal coding of frequency (typically used in single channel devices) is of limited use in CIs, since pitch sensations for pulse rates greater than 200-300 per second are not adequately discriminated (Clark et al., 1972). Furthermore, Clark (1969, 1970) demonstrated that although temporal coding of frequency information could be reproduced with electrical stimuli, it failed to elicit sustained neural responses at recording electrodes placed close to the brainstem auditory neurons.

From these findings (speech perception scores and electrophysiological studies), it can be argued that single channel CI systems with temporal coding of frequency do not adequately convey speech information, whereas multichannel CIs do provide sufficient tonotopic stimulation of the cochlea and better speech understanding than single channel devices.

Electrode design in multichannel CIs and anatomy of the cochlea

Each manufacturer has adopted a different approach to the design of the electrode array in its CIs. Nevertheless, all CI manufacturers strive to optimally stimulate the spiral ganglion cells in the modiolus. The spiral ganglion cells lie close to the spiral lamina for the first 25 mm of the cochlea (Marsh et al., 1993). Based on this anatomical constraint, designers of implant electrode arrays have developed curved electrode array configurations, allowing the electrode to lie closer to the modiolus. The theoretical advantage of placing the electrode closer to the modiolus is that the spiral ganglion cells can be stimulated with lower current levels, thereby increasing battery life and, in some cases, improving loudness growth without having to use high current levels.

Alternatively, a way to place the electrodes closer to the spiral ganglion cells and facilitate stimulation with lower current levels would be to increase the thickness of the electrode array. However, increasing the thickness of the electrode array increases the risk of trauma during insertion and removal (if ever required). It is therefore preferable to use thinner electrode arrays, so removal and reimplantation is feasible, if required (Miyamoto, 1996). However, when there is an implant failure, surgeons sometimes prefer to implant the previously non-implanted ear, rather than reimplanting the same ear.

The length of the human cochlea is another important factor considered by CI manufacturers when designing electrode arrays. Human cochleae are about 20-30 mm in length, with nerve endings and spiral ganglion cells spanning most of this distance, and electrode arrays are manufactured to match this length. Some electrode arrays are designed to be inserted up to 30 mm into the cochlea, whereas others can be inserted up to 25 mm. However, deeper electrode array insertion does not necessarily mean better hearing: there is limited benefit to an extended electrode array length, since the spiral ganglion cells become more distant from the scala tympani as we proceed from the base to the apex of the spiral cochlea (i.e., the spiral ganglion cells lie more obliquely beyond the first 25 mm and lose frequency specificity). Thus, electrically stimulating the cochlea beyond the first 25 mm may result in diffuse stimulation of the underlying spiral ganglion cells and may lead to poor pitch discrimination.

Regardless, the length of the human cochlea determines the maximum allowable length of the electrode array. There is a general relationship between the length of the electrode array, the number of electrodes in the array, the number of channels and the mode of stimulation (as previously discussed). In general, having more electrode sites serves two purposes. First, since the auditory nerve fibers in the cochlea are tonotopically arranged, more electrodes provide better frequency resolution; second, if any regions of the cochlea respond poorly or not at all to electrical stimulation, those regions can be avoided while programming the device.

Although a greater number of electrodes proportionally increases the flexibility of electrode programming, spacing them too closely can cause serious sound distortions due to channel interaction (when adjacent electrodes are stimulated simultaneously, especially in monopolar mode, the stimulation from one electrode interferes with that of the other; Wilson et al., 1991) and may also result in poor loudness growth. Adequate loudness growth may not be achieved if electrodes are spaced too closely and are stimulated in BP mode (Battmer, Zilberman, Haake, & Lenarz, 1999): this arrangement of closely spaced bipolar electrodes results in a restricted electrical field that may not be sufficient to ensure loudness growth even at the upper limits of the current levels used for stimulation. Therefore, spacing between electrodes is also an important factor when designing an electrode array.

Channel interaction between adjacent electrodes is not an issue when sequential stimulation is used, or when there is the option of changing the stimulation mode. When sequential stimulation is used, only one electrode is stimulated at any time, thus eliminating the possibility of channel interaction and its subsequent problems. The basic difference between the simultaneous and non-simultaneous (sequential) strategies is that in the former, electrical stimulation is delivered to multiple electrodes at the same time, whereas in the latter, only a single electrode site delivers stimulation at any moment in time.
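The contrast between the two strategies can be pictured as a stimulation timetable. The sketch below builds toy schedules in which the simultaneous strategy fires every active electrode in every time slot, while the sequential strategy gives each electrode its own slot so that no two ever overlap; the electrode numbers and frame counts are illustrative, not any manufacturer's timing.

```python
# Sketch: contrasting simultaneous and sequential (interleaved) stimulation
# as a per-time-slot schedule. Electrode counts and frame layout are
# illustrative, not any manufacturer's timing specification.

def simultaneous_schedule(electrodes, n_slots):
    """Every active electrode is stimulated in every time slot."""
    return [list(electrodes) for _ in range(n_slots)]

def sequential_schedule(electrodes, n_slots):
    """Exactly one electrode is stimulated per time slot, cycling in order."""
    return [[electrodes[slot % len(electrodes)]] for slot in range(n_slots)]

electrodes = [1, 2, 3, 4]

print("Simultaneous:", simultaneous_schedule(electrodes, n_slots=4))
print("Sequential:  ", sequential_schedule(electrodes, n_slots=8))
```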

Speech coding strategies

In general, coding strategies capture as much information as possible about different environmental sounds and speech, within the audible range, and transfer the information to different sites on the cochlea. To enhance speech understanding, critical parameters of the speech signal are coded.

Earlier coding strategies were based on the premise that "formants" (peak energy resonances of the vocal tract) were the key to speech perception. Therefore, formants (first formant, second formant and fifth formant) in various combinations were extracted (formant extraction strategies: F0/F1; F0/F1/F2; F0/F1/F5; F0/F1/F2/F5; F1/F2; and so on) and presented on a place-coding basis, while at the same time the F0 (fundamental frequency) was rate coded. However, speaker-to-speaker variations in formant frequencies resulted in "filter spillovers" and modest speech perception scores (although some CI patients using the WSP and MSP [formant extraction] processors received enough auditory information to even use the telephone).
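As a rough, purely illustrative picture of the kind of analysis a formant-extraction front end must perform, the sketch below picks the most prominent spectral peaks of a synthetic vowel-like signal as formant candidates. The signal, its resonance frequencies and the peak-picking settings are all invented, and the estimators in the actual WSP and MSP processors were considerably more sophisticated.

```python
# Sketch: picking prominent spectral peaks as crude formant candidates.
# The synthetic "vowel" (three damped resonances) and the peak-picking
# settings are invented for illustration; real formant extractors were
# considerably more sophisticated.
import numpy as np
from scipy.signal import find_peaks

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
# Hypothetical vowel-like signal: resonances near 500, 1500, and 2500 Hz.
signal = sum(np.exp(-t * 60) * np.sin(2 * np.pi * f * t) for f in (500.0, 1500.0, 2500.0))

spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

peaks, _ = find_peaks(spectrum, height=0.1 * spectrum.max(), distance=20)
candidates = freqs[peaks][np.argsort(spectrum[peaks])[::-1][:3]]
print("Formant candidates (Hz):", np.sort(candidates).round())
```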

Other implant makers designed "compressed analog" strategies, wherein filter banks split the acoustic information into different bands while preserving the amplitude relationships across the bands, and delivered these to the electrodes. In compressed analog strategies, all electrodes were stimulated simultaneously. This gave rise to amplitude and frequency interactions between the channels (channel interactions), which occasionally resulted in decreased speech understanding (Wilson et al., 1991). Obviously, this was not an issue with single channel systems. Channel interaction and loudness growth problems are (as discussed earlier) issues only with multichannel devices using simultaneous analog stimulation, especially in the monopolar mode. Further, because all electrodes are simultaneously stimulated in "compressed analog" strategies, power consumption was prohibitively high.

Some speech coding strategies are also based on the natural neurophysiological principle of "stochastic neural firing": neurons in the natural auditory system fire non-synchronously, and stimulating neurons with a slow rate of pulses induces an unnatural neural synchrony that is not present in the natural auditory system. Coding strategies such as CIS (Continuous Interleaved Sampling), now used by most implant manufacturers, and other high-rate coding strategies use higher rates of stimulation, but deliver them to relatively few "fixed" sites. As a consequence, information in the temporal domain is updated relatively quickly. Improved consonant recognition has been demonstrated using high-rate (833 pps) "n-of-m" strategies (Lawson et al., 1996). Similar benefits of higher stimulation rates (CIS strategy) have been demonstrated by other researchers (Kiefer et al., 1996; Loizou et al., 1997; Brill et al., 1997).
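The "n-of-m" selection step itself is simple to sketch: in each analysis frame, only the n channels with the largest envelope amplitudes (out of the m available) are stimulated. The envelope values below are made-up numbers chosen only to show the selection; a real processor would derive them from bandpass-filtered, rectified and smoothed audio.

```python
# Sketch: n-of-m channel selection for one analysis frame. The envelope
# values are made-up numbers; a real processor would compute them from
# bandpass-filtered, rectified, and smoothed audio.

def select_n_of_m(envelopes, n):
    """Return the indices of the n channels with the largest envelopes."""
    ranked = sorted(range(len(envelopes)), key=lambda ch: envelopes[ch], reverse=True)
    return sorted(ranked[:n])

# Hypothetical envelope amplitudes for m = 12 channels in one frame.
frame_envelopes = [0.02, 0.10, 0.45, 0.60, 0.31, 0.05, 0.22, 0.70, 0.12, 0.03, 0.08, 0.01]

selected = select_n_of_m(frame_envelopes, n=6)
print("Channels stimulated this frame:", selected)
```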

Although higher stimulation rates may be advantageous in some cases, stimulating "fixed" sites confines stimulation to the physical location of the electrodes within the cochlea. In some cases this "fixed" location may not be the most responsive to electrical stimulation, since the density and uniformity of the neuronal population vary with the type of pathology that caused the deafness. Moreover, the number of sites available for "fixed" stimulation depends on the insertion depth of the electrode array at the time of implantation. These variables reduce programming flexibility and may limit the performance potential of some individuals.

Simultaneous and partially simultaneous strategies are based on the rationale that simultaneous stimulation of the cochlea mimics natural auditory processes (as discussed earlier). Simultaneous Analog Stimulation (SAS) generates digitally reconstructed analog waveforms and delivers them simultaneously to the electrode sites in the cochlea at relatively high rates. The Paired Pulsatile Sampler (PPS), on the other hand, is an interleaved sampler strategy that uses an envelope extraction paradigm called "bin averaging" and stimulates two channels (spaced a few electrodes apart) at a time, so that a faster repetition rate can be achieved with minimal channel interaction. PPS is always used in monopolar coupling mode and, as with all monopolar strategies, requires lower current levels for stimulation.
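A paired scheme of this kind can again be pictured as a schedule, now with two widely separated electrodes sharing each time slot. The pairing rule in the sketch below (matching the first half of the array with the second half) is only an assumption used to illustrate the idea of keeping the paired contacts far apart; it is not the actual PPS pairing or timing.

```python
# Sketch: a paired pulsatile-style schedule in which two widely spaced
# electrodes share each time slot. The pairing rule (first half with second
# half of the array) is an assumption used only to illustrate the idea.

def paired_schedule(n_electrodes):
    """Pair electrode i with electrode i + n/2 so partners stay far apart."""
    half = n_electrodes // 2
    return [(i, i + half) for i in range(1, half + 1)]

for slot, pair in enumerate(paired_schedule(8), start=1):
    print(f"Slot {slot}: stimulate electrodes {pair[0]} and {pair[1]} together")
```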

Thus, the evolution of coding strategies represents a change in scientific thinking based on cumulative research into how the human cochlea works. All the attempts made thus far to code speech signals and other environmental sounds have been guided by research directed at identifying how the cochlea codes information in normal and pathological ears, and in doing so CIs have moved many steps closer to mimicking the natural auditory system.

Conclusion

After more than three decades of research in cochlear implants, the manufacturers, researchers and specialists involved in the development of these unique devices continue to strive to overcome the "electroneural bottleneck" (the interface between acoustic signals and the auditory central nervous system) and to develop smaller, faster and more intelligent cochlear implant systems.

Although we have witnessed great advances in CI technology, I believe many more are still to come. Importantly, in the final analysis, the responsibility for maximizing the outcomes of cochlear implantation rests with clinicians -- to use the features and flexibility available in these devices to program them optimally and maximize patient benefit.

The information presented in this paper is intended to help clinicians better understand the variables involved in CI technology.

Acknowledgement

My sincere thanks to Dr. Carla Johnson and an anonymous reviewer for providing comments on an earlier draft of the manuscript.

References

Battmer, R. D., Zilberman, Y., Haake, P., and Lenarz, T. (1999). Simultaneous Analog Stimulation (SAS)-Continuous Interleaved Sampler (CIS) pilot comparison study in Europe. Annals of Otology, Rhinology and Laryngology, Suppl 177, 108 (4), 69-73.

Brill, S., Gstottner, W., Helms, J., von Ilberg, C., Baungartner, W., Muller, J. and Kiefer, J. (1997). Optimization of channel number and stimulation rate for the fast CIS strategy in the COMBI 40+. American Journal of Otology, 18, 106-106.

Clark, G. M. (1969). Responses of cells in the superior olivary complex of the cat to electrical stimulation of the auditory nerve. Experimental Neurology, 24, 124-136.

Clark, G. M. (1970). Middle ear and neural mechanisms in hearing and the management of deafness. Ph.D. thesis, University of Sydney.

Clark, G. M. (1997). Cochlear implants. In H. Ludman and Wright (Eds.), Diseases of the ear. London: Edward Arnold.

Clark, G. M., Nathar, J. M., Kranz, H. G., and Maritz, J. S. (1972). A behavioral study on the electrical stimulation of the cochlea and central auditory pathways of the cat. Experimental Neurology, 36, 350-361.

Kiefer, J., Muller, J., Pfennigdorff, T., Schon, F., Helms, J., von Ilberg, C., Baungartner, W., Gstottner, W., Ehrenberger, K., Arnold, W., Stephan, K., Thumfart, W., and Baur, S. (1996). Speech understanding in quiet and in noise with the CIS speech coding strategy (Med-El Combi 40) compared to the Multipeak and Spectral Peak strategies (Nucleus). Journal for Oto-Rhino-Laryngology, 58, 127-135.

Lawson, D., Wilson, B. S., Zerbi, M., and Finley, C. (1996). Speech processors for auditory prostheses. NIH Project N01-DC-5-2103, Third Quarterly Progress Report.

Linthicum, F. H., Fayad, J., Otto, S. R., Galey, F. R., and House, W. F. (1991). Cochlear implant histopathology. American Journal of Otology, 12 (4), 245-311.

Loizou, P., Graham, S., Dickins, J., Dorman, M. and Poroy, O. (1997). Comparing the performance of the SPEAK strategy (Spectra 22) and the CIS strategy (Med-El) in quiet and in noise. Abstracts of 1997 Conference on Implantable Auditory Prostheses.

Marsh, M. A., Xu, J., Blamey, P. J., Whitford, L., Xu, S. A., Abonyi, J., and Clark, G. M. (1993). Radiologic evaluation of multichannel intracochlear insertion depth. American Journal of Otology, 14 (4), 386-391.

Miyamoto, R.J. (1996). Cochlear reimplantation. 3rd European symposium on pediatric cochlear implantation. Abstract 66, Hannover.

Osberger, M. J. and Fisher, L. (1999). SAS-CIS preference study in postlingually deafened adults implanted with the Clarion cochlear implant. Annals of Otology, Rhinology and Laryngology, Suppl 177, 108 (4), 74-79.

Wilson, B. S., Finley, C. C., Lawson, D. T., Wolford, R. D., Eddington, D. K., and Rabinowitz, W. M. (1991). Better speech recognition with cochlear implants. Nature, 352, 236-238.