Question
When a patient complains that the hearing aid is "too loud," and you are fairly sure the problem is not with soft sounds, how much should the compression be increased, and how much should the MPO be decreased? Both adjustments appear to have disadvantages: increasing compression doesn't actually change the MPO and could impair the understanding of speech cues, whereas chopping the top off the hearing aid output by reducing the MPO causes a greater range of inputs to receive the same output, implying a distorted subjective impression.
Answer
Great question, and I'm betting it's one that clinicians face almost daily, now that nearly all hearing aids allow for the adjustment of both AGCi/WDRC and AGCo (MPO). At some point we need to make the right adjustment, as we know that if the output for loud sounds isn't "right," this often prompts users to avoid certain listening situations, use less-than-optimum gain, or perhaps not use their hearing aids at all.
You mention that "changing compression doesn't actually change the MPO." I assume you mean the WDRC (either lowering the kneepoint or increasing the ratio), and your statement is only partly true. In many cases, if the patient does not have a VC, the WDRC more or less acts as compression limiting and indirectly controls the MPO. That is, if it is set aggressively enough, it's likely that most everyday loud inputs would not receive enough gain to reach the AGCo kneepoint. In general, of course, we'd like to use the WDRC to repackage speech, and use the AGCo for compression limiting, to keep the maximum output below the patient's LDLs. You say his complaint is "too loud," but I'm not sure whether that means "louder than he'd like to listen to on a regular basis" or "so loud that loud sounds are uncomfortable." Knowing this would also help determine whether you adjust the WDRC or the AGCo.
Regarding WDRC adjustments, I should acknowledge that with most software today, you probably don't directly adjust ratios or kneepoints. Rather, in the case you presented, you probably would click on a control that says something like "make loud sounds softer." What then happens behind the scenes differs somewhat from manufacturer to manufacturer, but if you reduce gain for loud, without reducing gain for soft, it's a good bet that you just made the compression ratio larger (e.g., 3:1 versus 2:1). Some manufacturers use more than one kneepoint within the range of speech inputs, so in these cases it could involve kneepoint manipulation too. For a good time over lunch one day, hook your favorite hearing aid up to a coupler, make a few compression adjustments using the generic buttons, run coupler curves, and see what is really happening in the background.
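To make the tradeoff concrete, here is a minimal Python sketch of a single-channel WDRC stage followed by AGCo limiting. All the numbers (kneepoint, gain, ratio, MPO) are illustrative, not from any particular product, and real instruments are multichannel with attack/release dynamics this static model ignores. It does show the questioner's point, though: raising the ratio lowers the output for loud inputs, while the MPO itself is set only by the AGCo.

```python
def wdrc_output(input_spl, linear_gain, tk, ratio, mpo):
    """Output SPL for a simplified single-channel WDRC aid with AGCo limiting.
    Below the compression kneepoint (tk), the aid applies linear_gain;
    above it, each additional dB of input adds only 1/ratio dB of output.
    The AGCo then clamps the result at the MPO."""
    if input_spl <= tk:
        out = input_spl + linear_gain
    else:
        out = tk + linear_gain + (input_spl - tk) / ratio
    return min(out, mpo)

# Illustrative numbers only: 50-dB kneepoint, 20 dB of linear gain, 110-dB MPO.
print(wdrc_output(80, 20, 50, 2, 110))   # 85.0 -> 80-dB input at a 2:1 ratio
print(wdrc_output(80, 20, 50, 3, 110))   # 80.0 -> same input, 3:1: "less loud"
print(wdrc_output(105, 20, 50, 1, 110))  # 110  -> AGCo clamps; the MPO did not move
```

Notice that changing the ratio from 2:1 to 3:1 took 5 dB off the loud-input output without touching the MPO, which is exactly why the two adjustments address different complaints.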
Okay—back to your question. You do mention that you're fairly sure that loudness perceptions for soft inputs are okay. This is useful information as it pretty much eliminates one possible alternative adjustment—turning down overall gain, which would then make soft sounds too soft, and create a new problem. I'm going to answer this as if the patient doesn't have a VC or remote device to change gain himself (if he did, he probably wouldn't have this complaint).
The first step in solving this problem would be to take a close look at your probe-mic speech mapping results for the higher input levels. Granted, your prescriptive targets, such as those of NAL-NL1, are just a starting point, but I certainly would be more inclined to make the WDRC ratios more aggressive if the output were 5 dB over target than if it were 5 dB below target. You asked "how much" compression should be changed; enough to obtain a reasonable target match would be my initial goal.
I'd also conduct an RESR, which would then give you a good idea of how the maximum earcanal output compares to the patient's frequency-specific LDLs you obtained earlier under earphones (which have now been plotted in earcanal SPL on the fitting screen). If the hearing aid's output exceeds the RESR targets, that would be a clear indication to lower the AGCo kneepoints for that region (many of today's products have multichannel AGCo, which makes this process more precise than in years past). If the RESR falls comfortably below the patient's LDLs, and you're reasonably confident you have valid LDLs, then I wouldn't mess with the AGCo kneepoints . . . yet. It's also okay to ask the patient if the signal was uncomfortably loud during the RESR testing—just make sure you distinguish between "Loud, But Okay" versus "Uncomfortably Loud".
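As a rough illustration of that comparison, the snippet below flags the channels whose measured maximum output meets or exceeds the LDL. The RESR and LDL values are hypothetical, the four frequencies stand in for a product's AGCo channels, and in practice the kneepoints are adjusted in the fitting software, not computed this way; a small safety margin is included as an optional parameter.

```python
# Hypothetical values (dB SPL in the ear canal): measured RESR maxima and the
# patient's frequency-specific LDLs, both expressed in ear canal SPL.
resr = {500: 102, 1000: 108, 2000: 112, 4000: 106}
ldls = {500: 110, 1000: 110, 2000: 108, 4000: 112}

def channels_to_lower(resr, ldls, margin=0):
    """Frequencies where the maximum output meets or exceeds the LDL
    (minus an optional safety margin in dB); these are the candidates
    for lowering the AGCo kneepoint in a multichannel instrument."""
    return [f for f in resr if resr[f] >= ldls[f] - margin]

print(channels_to_lower(resr, ldls))            # [2000]
print(channels_to_lower(resr, ldls, margin=3))  # [1000, 2000]
```

With these made-up numbers, only the 2000-Hz channel exceeds the LDL outright, but adding a 3-dB margin also flags 1000 Hz, mirroring the clinical judgment call about how close to the LDL is "comfortably below."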
I said your probe-mic testing was the first step, as you also may want to conduct some aided loudness verification using a broadband signal such as speech. This would help account for issues such as loudness summation and binaural summation (I'm assuming it's a bilateral fitting), factors that could be influencing his judgments. I like the loudness verification protocol of the IHAFF, which uses Cox's 7-Point Loudness Anchors as a reference. Again, you asked "how much" compression should be changed. Well, if the patient clearly ranks the loud input signal a #7, then you would need to increase compression enough to obtain a #6 rating. It could be, however, that you find his loudness judgments are quite appropriate, and the entire problem may be more a counseling issue than a need for compression adjustment.
So like most things related to hearing aid fittings, there is no simple answer. To get it right, you have to think about how WDRC works, how AGCo works, how your favorite manufacturer's software works, overall gain settings, the patient's LDLs, your prescriptive targets, the speech mapping results, the RESR results, and the aided loudness judgments. But hey, that's all part of what defines our profession!
H. Gustav Mueller, Ph.D., is Professor of Audiology at Vanderbilt University in Nashville, TN, with a consulting practice nestled in Bismarck, ND. He is a consultant for Siemens Hearing Instruments and Contributing Editor of The Hearing Journal. Contact: GoVandy@GusMueller.net or earGuy@www.earTunes.com