
Dynamic Soundscape Processing: Research Supporting Patient Benefit

Matthias Froehlich, PhD, Katja Freels, Dipl.Ing., Eric Branda, AuD, PhD
December 16, 2019
This article is sponsored by Signia.

Learning Outcomes

After this course, learners will be able to:

  • Describe the soundscape processing of Signia Xperience.
  • Describe the findings obtained for Xperience in laboratory studies.
  • Describe the findings obtained for Xperience in a real-world EMA study.

Introduction

Signal processing in hearing aids has progressed to the point that for some listening-in-noise conditions, speech understanding for individuals with hearing loss is equal to or better than their peers with normal hearing (Froehlich, Freels & Powers, 2015).  One area, however, where improvement is still possible relates to the listener’s intent—what is the desired acoustic focus.  At a noisy party, for example, we may want to focus our attention on a person in a different conversation group, to “listen-in” on what he or she is saying.  While driving a car, we might divert our attention from the music playing to focus on a talker in the back seat.  Our listening intentions are often different in quiet vs. noise, when we are outside vs. in our homes, or when we are moving vs. when we are still.  Can we design hearing aid technology to automatically achieve the best possible match between the listener’s intentions and the hearing aid’s processing?  A 100% match is probably not possible, but improvements continue to be made.

Going back to the early 2000s, Siemens/Signia has been the industry leader in developing automatic features that attempt to follow the listener’s intent, resulting in improved speech understanding:

  • Directional technology that automatically switches between omnidirectional and directional processing based on the listening condition (Powers & Hamacher, 2002).  
  • e2e wireless technology—the first wireless system that allowed synchronous steering of both hearing instruments in a bilateral fitting (for review of e2e history, see Herbig, Barthel & Branda, 2014).
  • Development of automatic adaptive polar patterns allowing the null of the pattern to align with a single noise source, or to track a moving noise source (Ricketts, Hornsby & Johnson, 2005).  
  • Based on the location of the primary speech signal, automatic directional focus to the back and to the sides (Mueller, Weber & Bellanova, 2011; Chalupper, Wu & Weber, 2011) 
  • Narrow Directionality using bilateral beamforming for the most demanding speech-in-noise listening situations (Herbig & Froehlich, 2015).  

All of these features were developed to match the hearing aid user’s probable intent for a given listening situation—in most cases, an emphasis on the desired speech signal, or reduction of unwanted background noise.  So what is left to do?

New Acoustic and Motion Signal Processing

One area of interest centers on improving the importance functions assigned to speech and other environmental sounds that originate from azimuths other than the front of the user; in general, the goal is better identification and interpretation of the acoustic scene. To address this issue, an enhanced signal processing system was recently developed for the new Signia Xperience hearing aids.  This advanced analysis considers such factors as: 

  • The overall noise floor
  • Distance estimates for speech, noise and environmental sounds
  • The calculated signal-to-noise ratios
  • Estimates of the azimuth of the primary speech signal
  • Determination of ambient modulations in the acoustic soundscape
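Of these factors, the calculated signal-to-noise ratio is the most straightforward to illustrate: it is the ratio of estimated speech power to estimated noise power, expressed in decibels. The sketch below shows only this generic acoustics relationship, not Signia's proprietary estimator; the power values are hypothetical inputs.

```python
import math

def snr_db(speech_power, noise_power):
    """Return the signal-to-noise ratio in dB from two power estimates."""
    return 10 * math.log10(speech_power / noise_power)

# Example: speech at twice the noise power corresponds to about +3 dB SNR.
print(round(snr_db(2.0, 1.0), 1))  # 3.0
```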

A second addition to the Xperience processing, again intended to correctly support the intent of the hearing aid user, was the inclusion of motion sensors to assist in the signal classification process, leading to a combined classification system named “Acoustic-Motion Sensors.”  In nearly all cases, our listening intentions when moving differ from when we are still: we have an increased interest in what is all around us rather than a specific focus on a single sound source.  Using these motion sensors, the processing of Xperience adapts automatically when movement is detected.
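Conceptually, the combined classifier can be pictured as a decision that takes both an acoustic class and a motion flag. The sketch below is purely illustrative; the class names, rules, and output modes are hypothetical and do not represent Signia's actual implementation.

```python
def select_processing(acoustic_class, is_moving):
    """Illustrative mode selection: widen the acoustic focus when the
    wearer is moving, narrow it for frontal speech-in-noise when still.

    All labels here are hypothetical stand-ins for a real classifier's
    outputs; they only sketch the idea of combining acoustic
    classification with a motion sensor.
    """
    if is_moving:
        return "omnidirectional"      # favor ambient awareness while moving
    if acoustic_class == "speech_in_noise":
        return "narrow_directional"   # focus on frontal speech when still
    return "balanced"

print(select_processing("speech_in_noise", is_moving=True))   # omnidirectional
print(select_processing("speech_in_noise", is_moving=False))  # narrow_directional
```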

To evaluate the patient benefit of these new processing features, three research studies were conducted: one to evaluate the efficacy of the algorithms in laboratory testing, a second to ensure that the benefit of Narrow Directionality is maintained when the new processing is active, and a third to determine real-world effectiveness using ecological momentary assessment (EMA).

Laboratory Assessment of Acoustic-Motion Sensors

The laboratory study of the Dynamic Soundscape Processing was reported by Froehlich, Branda & Freels (2019).  The participants were fitted bilaterally with two different sets of Signia Pure RIC hearing aids, which were identical except that one set had the new Acoustic-Motion Sensor-based acoustic scene classification algorithm.  The participants were tested in two different listening situations. 

Scenario #1 (Restaurant Condition). This scenario was designed to simulate the situation when a hearing aid user is engaged in a conversation with a person directly in front, and unexpectedly, a second talker, who is outside the field of vision, enters the conversation.  This is something that might be experienced at a restaurant when a server approaches.  The target conversational speech was presented from 0° azimuth (female talker; 68 dBA) and the background cafeteria noise (64 dBA) was presented from four speakers surrounding the listener (45°, 135°, 225° and 315°).  The male talker from the side (68 dBA) was presented at random intervals, originating from a speaker at 110°. The participants were tested with the two sets of instruments (i.e., new processing “on” vs. new processing “off”).  After each series of speech signals from the speaker to the side, the participants provided ratings. 

Ratings were conducted on 13-point scales ranging from 1=Strongly Disagree to 7=Strongly Agree, in half-point steps.  The ratings were based on two statements related to different dimensions of listening:  Speech understanding—“I understood the speaker(s) from the side well.”—and listening effort—“It was easy to understand the speaker(s) from the side.”  Speech understanding from the front also was rated.
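The "13-point scale" description follows directly from stepping a 1-to-7 scale in half points, which can be checked quickly:

```python
# A 1-to-7 agreement scale in half-point steps yields 13 response points.
scale = [1 + 0.5 * i for i in range(13)]
print(len(scale), scale[0], scale[-1])  # 13 1.0 7.0
```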

Scenario #2 (Traffic Condition). This scenario was designed to simulate the situation when a person is walking on a sidewalk on a busy street with traffic noise (65 dBA), with a conversation partner on each side.  The azimuths of the traffic noise speakers were the same as the cafeteria noise for Scenario #1, and for this testing, the participants were tested with the motion sensor either on or off (although the participant was seated, the motion sensor was activated to respond as if the participant were moving for the test condition).  The participant faced the 0° speaker, with the speech from the conversational partners coming from 110° (male talker) and 250° (female talker) at 68 dBA.  The rating statements and response scales were the same as used for Scenario #1.

Results

Froehlich et al. (2019) reported the following results: For the restaurant listening scenario, there was a significant advantage (p<.05) for the talker to the side with “new processing on,” for both speech understanding and ease of listening.  See Figure 1 for mean data.  As expected, the primary talker from the front was rated equally for “on” versus “off” (p>.05).


Figure 1. Shown are the mean ratings for both speech understanding and listening effort for the speaker from the side in the restaurant condition.  The 13-point scale was from 1=Strongly Disagree to 7=Strongly Agree, with mid-points included.  The participant (surrounded by cafeteria noise; 64 dBA), while listening to a conversation originating from 0°, rated a talker that randomly spoke from 110° (SNR = +4 dB).  The asterisk indicates significance at p<.05. (Adapted from Froehlich et al., 2019.)

The mean results for the traffic scenario are shown in Figure 2.  Recall that in this case, the participant was surrounded by traffic noise and had conversation partners on either side (110° and 250°, SNR = +3dB). And again, speech understanding and listening effort were rated.  When the new signal processing strategies were implemented, performance was significantly better (p<.05) for both speech understanding and listening effort.


Figure 2. Shown are the mean ratings for both speech understanding and listening effort for the traffic condition.  The 13-point scale was from 1=Strongly Disagree to 7=Strongly Agree, with mid-points included.  The participant, surrounded by background traffic noise (65 dB SPL), provided ratings for talkers randomly originating from either side (110° and 250°, SNR=+3dB).  The asterisk indicates significance at p<.05. (Adapted from Froehlich et al., 2019.)

Maintaining Narrow Directionality Benefit 

While the significant findings shown in Figures 1 and 2 are encouraging, it is reasonable to question if this new enhancement of speech from the sides could be detrimental to the effectiveness of the instrument’s binaural beamforming Narrow Directionality.  That is, the classification has to be precise enough that unwanted signals from the sides are amplified as little as possible when the listening intent involves a narrow directional focus to the front.

Since its introduction with the binax platform in 2014, Signia’s Narrow Directionality has been the industry leader in optimizing speech understanding in background noise.  For example, research using speech recognition in background noise revealed that a competitive new technology – which its manufacturer claimed “exceeds and supplants traditional directionality and noise reduction protocols” – fell significantly below the performance obtained with Signia instruments utilizing Narrow Directionality (Littmann & Høydal, 2017). Similarly, during this past year, two major manufacturers have introduced new hearing aid models with new technology, reporting that the new technology has improved processing for speech recognition in background noise.  However, the Signia Nx product with Narrow Directionality was significantly better than both these competitors in three different noise conditions: traffic noise, cafeteria noise and competing talkers, using speech-in-noise recognition testing (see Branda, Powers & Weber, 2019).  

As mentioned earlier, it is important that the Signia Xperience platform maintains the high level of speech-in-noise performance previously demonstrated with the Signia Nx.  A study, therefore, was designed to determine if indeed this was the case (Powers, Weber & Branda, 2019).

The hearing instruments used in this study were the premier RIC models of Signia, Xperience Pure and Nx Pure. The hearing aids were fitted with closed-fitting double-dome eartips. For each participant, instruments were programmed to the NAL-NL2 prescriptive method (experienced user, bilateral fitting), verified with probe-microphone measurements and adjusted to be within +/-5 dB of prescriptive targets from 500 to 4000 Hz.  Using the Signia app, the directivity of both products was set to the most pronounced directivity to the front.

The test conditions replicated the work of Branda et al (2019). The array for the presentation of the target and competing speech material consisted of 8 loudspeakers surrounding the participant, equally spaced at 45° increments, starting at 0° (i.e., 0°, 45°, 90°, 135°, etc.). The participant was seated 1.5 meters from all loudspeakers in the center of the room, directly facing the target speech signal at 0°. 
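For illustration, the geometry of this array can be computed directly from the description above. The (x, y) coordinate convention used here (0° straight ahead, angles increasing clockwise) is an assumption for the sketch, not something specified in the study report.

```python
import math

RADIUS_M = 1.5  # participant-to-loudspeaker distance reported in the study

def speaker_positions(n=8, radius=RADIUS_M):
    """(azimuth in degrees, x, y) for n equally spaced loudspeakers.

    0 degrees is straight ahead of the listener; coordinates are rounded
    to the nearest centimeter.
    """
    out = []
    for i in range(n):
        az = i * 360 / n
        rad = math.radians(az)
        out.append((az,
                    round(radius * math.sin(rad), 2),
                    round(radius * math.cos(rad), 2)))
    return out

for az, x, y in speaker_positions():
    print(az, x, y)
```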

The target speech material for the speech recognition measures was the sentences of the American English Matrix Test (AEMT).  Three different background noises were used: traffic, cafeteria, and competing talkers (uncorrelated sentences of the AEMT). Each participant was tested consecutively with the same product for the three different noise conditions. 

The resulting individual SNRs (referenced to SRT-50) for each condition were used for analysis. Statistical analysis showed no significant main effect of hearing aid (Nx vs. Xperience) and no interaction between noise type and hearing aid. In other words, there was no evidence of a difference between the Narrow Directionality feature when used in either the Signia Nx or Xperience hearing aids. The mean findings are shown in Figure 3.
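The AEMT measures the SRT-50 adaptively, converging on the SNR at which the listener repeats 50% of the speech material correctly. The toy staircase below illustrates only the general idea of such an adaptive track; the actual AEMT adaptive rule and step sizes differ.

```python
def staircase_srt(trial_correct, start_snr=0.0, step=2.0):
    """Toy 1-up/1-down staircase: lower the SNR after a correct trial,
    raise it after an incorrect one, so the track hovers near the SNR
    yielding 50% intelligibility. Illustrative only; not the AEMT rule.
    """
    snr = start_snr
    track = []
    for correct in trial_correct:
        track.append(snr)
        snr += -step if correct else step
    return track

# Hypothetical sequence of trial outcomes:
print(staircase_srt([True, True, False, True, False]))
# [0.0, -2.0, -4.0, -2.0, -4.0]
```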


Figure 3. Shown are the mean SNRs obtained from the AEMT for the Signia Xperience and the Signia Nx for 50% intelligibility. Results shown for three different noise conditions (error bars indicate 95% confidence intervals). (Adapted from Powers et al., 2019.)

The purpose of the Powers et al (2019) research was to ensure that the Narrow Directionality feature of the Xperience was equal to that of the Nx model, which indeed was the finding. Recall that previous research had shown that the speech-in-noise processing of the Nx was superior to that of premier models from competitive manufacturers.  Given the finding of “no difference” between Xperience and Nx Narrow Directionality, it is reasonable to assume that even with the new Acoustic-Motion Sensors active, Xperience also is superior to the competition for these important listening conditions (see Powers et al, 2019, for a complete report of this research).

Real-World Effectiveness

While the positive findings from the laboratory data for the new types of processing were encouraging, it was important to determine if these patient benefits extend to real-world hearing aid use.  Therefore, a third study (reported by Froehlich et al, 2019), was conducted with the Xperience product involving a home trial.

The 35 participants in the study all had bilateral symmetrical downward-sloping hearing losses and were experienced users of hearing aids. They were fitted bilaterally with Signia Xperience Pure 312 7X RIC instruments, with vented ear couplings.  The participants rated their hearing aid experience during the one-week field trial using ecological momentary assessment (EMA). That is, ratings for a real-world listening experience were conducted during or immediately after that experience.  The EMA app linked the participants’ smartphones to the Signia hearing aids and logged responses while participants were answering a questionnaire.  The primary EMA questions covered seven different listening environments, the actions of the user (still or moving), and the user’s perceptions of the situation.  The participants were trained on using the app prior to the home trial.

EMA has been used in psychology research since the 1940s, but only during the last decade has it been employed commonly in hearing aid research (see Wu, 2017 for a review).  A traditional questionnaire, typically completed days or weeks after a specific listening situation, can be biased.  EMA aims to minimize recall bias and memory distortions, maximize ecological validity, and allow for the study of other factors that influence behavior in real-world contexts (e.g., the SNR of a given listening situation can be measured and stored at the same time that the participant is rating satisfaction for speech understanding). Retrospective self-reports also suffer from poor contextual resolution: what is considered “background noise” by one person might not be background noise for another. It is believed that EMA captures more accurate self-reports by asking people about their experiences closer to the time and context in which they occur. EMA also allows for more frequent sampling, which increases the validity of the rating for a given type of situation and provides data for a higher number of situations. It provides insights into communication events that may happen less frequently and would be hard to replicate in the lab.

For the present study, EMA was conducted for the following listening situations: At home with background noise present, in buildings with background noise, outside with background noise, standing on a busy street, walking on a busy street, in public transportation, and outside in quiet (yard, park, etc).  Through EMA, the following perceptions were evaluated for the above experiences:  Loudness, sound quality, speech understanding, listening effort, naturalness, direction of a sound, distance of a sound, and overall satisfaction.

Results

For the analyses, EMA queries that were started but not fully completed were eliminated, resulting in 1,938 EMAs used for the findings reported here (an average of 55 per participant for the week-long trial).  As discussed earlier, one of the primary benefits of Xperience is derived from the motion sensor that is integrated into the hearing aids. To evaluate the effectiveness of this feature, EMAs were examined for three conditions involving speech understanding in background noise while the participants reported that they were moving:  noise in the home (136 EMAs), noise inside a building (153 EMAs), and noise outside (31 EMAs).  The participants rated their ability to understand in these situations on a 9-point scale, ranging from 1=Nothing, 5=Sufficient to 9=Everything.  We could assume that even a rating of #5 (Sufficient) would be adequate for following a conversation, but for the values shown in Figure 4, we combined the ratings of #6 (Rather Much) and higher.  As would be expected, the understanding ratings for in the home were the highest, but for all three of these difficult listening situations—understanding speech in background noise while moving—overall understanding was good.  The highest rating of “Understand Everything” on the 9-point scale was given for 60% of the EMAs for home, 62% for inside a building, and 39% for outside.
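Percentage values like these can be reproduced from raw EMA ratings with a simple threshold count. The sketch below is illustrative only; the sample ratings are hypothetical, not study data.

```python
def percent_at_or_above(ratings, threshold=6):
    """Percent of EMA ratings at or above a cutoff on the 9-point
    understanding scale (1=Nothing, 5=Sufficient, 9=Everything).

    Illustrative only; the study's actual analysis pipeline is not
    published.
    """
    qualifying = sum(1 for r in ratings if r >= threshold)
    return 100 * qualifying / len(ratings)

# Hypothetical ratings for one listening situation:
sample = [9, 7, 5, 8, 6, 4, 9, 6]
print(percent_at_or_above(sample))  # 75.0
```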


Figure 4. Shown are the combined understanding ratings of #6 (Rather Much) or higher (9-point scale) for the EMA questions related to understanding speech in background noise while moving.  Results shown for in the home (136 EMAs), in a building (153 EMAs), and when outside (31 EMAs). (Adapted from Froehlich et al., 2019.)

A challenging listening situation that occurs while moving is having a conversation while walking down a busy street.  For this condition, three EMA questions were central: Is the listening impression natural? Is the acoustic orientation appropriate?  What is the overall satisfaction for speech understanding?  The first two of these were rated on a four-point scale: Yes, Mostly Yes, Mostly No, and No.  Satisfaction for speech understanding was rated on a 7-point scale similar to that used in MarkeTrak surveys: 1=Very Dissatisfied to 7=Very Satisfied.  

The results for these three questions for the walking on a busy street with background noise condition are shown in Figure 5.  Percentages are either percent of Yes/Mostly Yes answers, or percent of EMAs showing satisfaction (a rating of #5 or higher on the 7-point scale).  As shown, in all cases, the ratings were very positive. Perhaps most notable was that 88% of the EMAs reported satisfaction for speech understanding for this difficult listening situation.


Figure 5. Shown are the percentages representing either the percent of Yes/Mostly Yes answers, or percent of EMAs reporting satisfaction (a rating of #5 or higher on the 7-point scale).  The number of EMAs used for the analysis were 80 for Natural Impression, 63 for Acoustic Orientation, and 79 for Overall Satisfaction. (Adapted from Froehlich et al., 2019.)

As discussed earlier, in addition to the motion sensors, part of Dynamic Soundscape Processing is a new signal identification and classification system, with the primary goal of improving speech understanding from varying azimuths together with ambient awareness.  Several of the EMA questions were geared to these types of listening experiences. 

The participants rated satisfaction on a 7-point scale, the same as has commonly been used for EuroTrak and MarkeTrak.  If we take the most difficult listening situation—understanding speech in background noise—the EMA data revealed satisfaction of 92% for Xperience (this was the average satisfaction rating for all conditions in which the participant stated that background noise was present).  We can compare this to other large-scale studies.  Part of the EuroTrak satisfaction survey has the hearing aid user rate satisfaction for a variety of different listening conditions, one of which is “Use in Noisy Situations.”  The data for this listening category differ somewhat from country to country but in all cases fall significantly below the Xperience results. A sampling from five countries compared to our data is shown in Figure 6.  These all are EuroTrak surveys from either 2018 or 2019.


Figure 6. Shown are the satisfaction ratings (in percent) for use in noisy situations for Xperience participants compared to the EuroTrak findings for five sample countries (surveys from either 2018 or 2019).  Satisfaction ratings shown are the combined values of Somewhat Satisfied, Satisfied, and Very Satisfied.

The findings of MarkeTrak10 recently became available, and it is therefore possible to also compare the EMA results with Xperience to these survey findings.  This in fact might be more relevant, as the MarkeTrak10 data that we used for comparison here were from individuals who were using hearing aids that were only 1 year old or newer—the EuroTrak data included all users, many of whom had older hearing aids. 

While the EMA questions were not worded exactly like the questions on the MarkeTrak10 survey, they were similar and therefore provide a meaningful comparison.  Shown in Figure 7 are the percent of satisfaction (combined ratings for Somewhat Satisfied, Satisfied, and Very Satisfied) for overall satisfaction and for three different common listening situations.  We did not have EMA questions differentiating small groups from large groups, but MarkeTrak10 does.  The MarkeTrak10 findings were 83% satisfaction for small groups and 77% for large groups.  What is shown for MarkeTrak for this listening situation in Figure 7 is 80%, the average of the two group findings.  
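The 80% figure for group conversations is simply the mean of the two MarkeTrak10 group values:

```python
# MarkeTrak10 satisfaction percentages, as reported above:
small_group, large_group = 83, 77
combined = (small_group + large_group) / 2
print(combined)  # 80.0
```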

In general, satisfaction ratings for Xperience were very high, and exceeded those from MarkeTrak10, even when comparing to the rather strong baseline for hearing aids that were less than 1 year old and even though most of the EMA questions were answered in situations with noise.


Figure 7. Shown is the percent satisfaction for the Xperience EMAs, compared to MarkeTrak10 findings, for three different listening situations and for overall satisfaction. Overall satisfaction = 1,938 EMAs; one-to-one conversations = 564 EMAs; group conversations = 151 EMAs; conversations in noise = 598 EMAs. (Adapted from Froehlich et al., 2019.) 

Summary and Conclusions

There is a continued effort to design hearing aid technology that “thinks” the way the listener thinks, and automatically adjusts processing accordingly.  The Signia Xperience provides an advantage in this important area.  The processing of this instrument more effectively addresses speech from directions other than the front, while maintaining all the advantages of Signia’s Narrow Directionality. It also provides enhanced ambient awareness and adjusts when the hearing aid user is moving. As reported here, when participants used these instruments, laboratory data revealed significantly better speech understanding for speech from the sides, both when stationary and when moving.  Speech-in-noise recognition research showed that the new Dynamic Soundscape Processing did not reduce the effectiveness of Narrow Directionality when the desired speech signal was from the front of the listener. And finally, real-world data using EMA methodology revealed highly satisfactory environmental awareness, and higher overall user satisfaction ratings than have been obtained in either the EuroTrak or the recent MarkeTrak10 surveys.  

References

Branda E, Powers T, Weber J. (2019) Clinical Comparison of Premier Hearing Aids. Canadian Audiologist. 6(4).

Chalupper J, Wu Y, Weber J. (2011) New algorithm automatically adjusts directional system for special situations. Hearing Journal. 64(1): 26-33.

Froehlich M, Freels K, Powers T. (2015) Speech recognition benefit obtained from binaural beamforming hearing aids: comparison to omnidirectional and individuals with normal hearing. AudiologyOnline, Article 14338. Retrieved from https://www.audiologyonline.com

Froehlich M, Branda E, & Freels K. (2019) Research Evidence for Dynamic Soundscape Processing Benefits.  Hearing Review. 26(9). 

Herbig R, Barthel R, Branda E. (2014) A history of e2e wireless technology. Hearing Review. 21(2): 34-37.

Herbig R, Froehlich M. (2015) Binaural Beamforming: The Natural Evolution. Hearing Review. 22(5):24.

Littmann V, Høydal E. (2017) Comparison study of speech recognition using binaural beamforming narrow directionality. Hearing Review. 24(5):34-37.

Mueller HG, Weber J, Bellanova M. (2011) Clinical evaluation of a new hearing aid anti-cardioid directivity pattern. International Journal of Audiology. 50(4):249-254.

Powers TA, Hamacher V. (2002) Three-microphone instrument is designed to extend benefits of directionality. Hearing Journal. 55(10):38-45.

Powers TA, Weber J, Branda E. (2019) Maintaining narrow directionality while improving soundscape processing. Canadian Audiologist. 6(6).

Ricketts T, Hornsby BY, Johnson EE. (2005) Adaptive Directional Benefit in the Near Field: Competing Sound Angle and Level Effects. Seminars in Hearing 26(2): 59-69.

Wu, Y-H. (2017). EMA methodology - research findings and clinical potential. AudiologyOnline, Article 20193.  Retrieved from www.audiologyonline.com

Citation

Froehlich, M., Freels, K. & Branda, E. (2019). Dynamic soundscape processing: research supporting patient benefit. AudiologyOnline, Article 26217. Retrieved from https://www.audiologyonline.com



Matthias Froehlich, PhD

Head of Global Marketing Audiology, Sivantos

Dr. Matthias Froehlich is head of Audiology Strategy for Sivantos in Erlangen Germany. He is responsible for the definition and validation of the audiological benefit of new hearing instrument platforms. Dr. Froehlich joined Sivantos (then Siemens Audiology Group) in 2002, holding various positions in R&D, Product Management, and Marketing since then. He received his Ph.D. in Physics from Goettingen University, Germany.



Katja Freels, Dipl.Ing.

R&D Audiologist, Sivantos GmbH

Ms. Freels has been a research and development audiologist at Sivantos in Erlangen Germany since 2008.  Her main responsibilities include the coordination of clinical studies and research projects.  Prior to joining Sivantos (then, Siemens Audiology), Ms. Freels worked as a dispensing audiologist.  She studied Hearing Technology and Audiology at the University of Applied Sciences in Oldenburg, Germany.  



Eric Branda, AuD, PhD

Dr. Eric Branda is an Audiologist and Director of Applied Audiological Research for WS Audiology in the USA. For over 25 years, Eric has been involved in audiological, technical and research initiatives around the globe. He specializes in investigations on new product innovations, as well as with research partners, helping WSA fulfill its goal of creating advanced hearing solutions for all types and degrees of hearing loss. Dr. Branda received his PhD from Salus University, his AuD from the Arizona School of Health Sciences and his Master’s degree in Audiology from the University of Akron.


