
Audiology and Evidence-based Health Care
Kyle C. Dennis, PhD, CCC-A, FAAA
September 18, 2000

A. Why should we measure outcomes?

Health care decision making is shifting from cost-driven choices to quality-driven choices. This paradigm shift refocuses the emphasis from 'what does it cost?' to 'what do I get for my money?'
Accrediting agencies such as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), the Commission on Accreditation of Rehabilitation Facilities (CARF), and the National Committee for Quality Assurance (NCQA) have shifted to outcome-based standards as part of their accreditation reviews.

Outcome measures provide a wealth of information for patients, stakeholders, third-party payers, and clinicians, including:



  • Objective information on best practices, treatment options, and treatment efficacy

  • Data-driven decision making and data-driven problem solving

  • Demonstration of and maximization of value

  • Demonstration of benefit of the recommended treatment (or course of action)

  • Quality assurance and total quality improvement

  • Accountability

  • Validation of treatment efficacy

  • Marketing of clinical services

  • Research into treatment alternatives, treatment efficacy, benefit, and best practices

  • Performance benchmarks

  • Best practices (clinical practice guidelines)


    B. Standard Definitions (WHO)

    The World Health Organization (WHO), in its ICIDH-2, the International Classification of Impairments, Disabilities, and Handicaps, offers the following definitions:

    • Impairment. Impairment affects functioning at the level of the body and is defined as 'a loss or abnormality of body structure or physiological or psychological function.' For example: hearing loss.

    • Activity Limitation. Activity limitation defines 'the nature or extent of functioning at the level of the person.' For example: difficulty hearing speech in background noise.

    • Participation Restriction. Participation restriction is defined as 'the nature or extent of a person's involvement in life situations in relation to impairment, activities, health conditions, and contextual factors.' For example: limitation in participation in one's own health care.

    • Health-related Quality of Life. Health-related quality of life is defined as 'the functional effect of an illness and its consequent therapy upon the patient.' For example: the impact of hearing loss on family.

    • Satisfaction. Satisfaction is the subjective assessment by the customer or patient that his/her needs or expectations have been met. It does not matter how good the care or how effective the treatment was if the patient was not satisfied with the outcome.

    C. Outcome Measures

    A good outcome measure addresses relevant outcome domains, satisfies psychometric standards of reliability (norms), has low respondent burden (i.e., it is easy to complete and appropriate to the reading level) for the patient/customer or the clinician, and is feasible and clinically useful.

    There are a variety of outcome measures used by audiologists. An outcome measure may address one or more outcome domains: impairment, activity, participation, satisfaction, and health-related quality of life. Some outcome measures, such as pure tone thresholds, insertion gain, and the audibility index (AI), are used every day and provide objective evidence of patient status. These are outcome measures in the impairment domain.

    Speech recognition scores (W-22, NU-6, SPIN, HINT, etc.), the Abbreviated Profile of Hearing Aid Benefit (APHAB; Cox and Alexander, 1995), the Client Oriented Scale of Improvement (COSI; Dillon et al. 1997), and the Glasgow Hearing Aid Benefit Profile (Gatehouse, 1999) are examples of outcome measures in the activity domain. The APHAB, for example, asks the patient to rate the frequency with which he/she has problems in a specific situation: 'I have difficulty hearing a conversation when I'm with one of my family at home.'

    The Hearing Handicap Inventory for the Elderly (HHIE; Ventry and Weinstein, 1982), the Hearing Handicap Inventory for Adults (HHIA; Newman et al. 1991), the Abbreviated Profile of Hearing Aid Benefit (APHAB; Cox and Alexander, 1995), the Client Oriented Scale of Improvement (COSI; Dillon et al. 1997), and the Glasgow Hearing Aid Benefit Profile (Gatehouse, 1999) are outcome measures in the participation domain. For example, the HHIE asks the question 'Does a hearing problem cause you to avoid groups of people?'

    The Satisfaction with Amplification in Daily Life (SADL) and the ASHA Consumer Satisfaction Measure are examples of outcome measures in the satisfaction domain. For example, the SADL asks the question 'Does wearing your hearing aid(s) improve your self-confidence?'

    The HHIE, the Communication Profile for the Hearing Impaired (CPHI; Demorest and Erdman, 1987), the Sickness Impact Profile (SIP; Bergner et al. 1981), the MOS 36-Item Short-Form Health Survey (SF-36; Ware and Sherbourne, 1992), and the Health Utilities Index (HUI; Feeny et al. 1996) are examples of outcome measures in the health-related quality of life domain. For example, the SF-36 asks the question 'Compared to a year ago, how would you rate your health in general now?'
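
    As a rough illustration only, the short Python sketch below restates the domain assignments given in this section as a lookup table; the instrument list and the helper function are purely illustrative and add nothing beyond the examples above.

        # Illustrative only: outcome measures named in this section and the
        # WHO domains they address; an instrument may address more than one domain.
        OUTCOME_DOMAINS = {
            "Pure tone thresholds": {"impairment"},
            "Insertion gain": {"impairment"},
            "Audibility index (AI)": {"impairment"},
            "Speech recognition (W-22, NU-6, SPIN, HINT)": {"activity"},
            "APHAB": {"activity", "participation"},
            "COSI": {"activity", "participation"},
            "Glasgow Hearing Aid Benefit Profile": {"activity", "participation"},
            "HHIE": {"participation", "health-related quality of life"},
            "HHIA": {"participation"},
            "SADL": {"satisfaction"},
            "ASHA Consumer Satisfaction Measure": {"satisfaction"},
            "CPHI": {"health-related quality of life"},
            "SIP": {"health-related quality of life"},
            "SF-36": {"health-related quality of life"},
            "HUI": {"health-related quality of life"},
        }

        def measures_for_domain(domain):
            # Return the instruments above that address the given WHO domain.
            return sorted(name for name, domains in OUTCOME_DOMAINS.items()
                          if domain in domains)

        print(measures_for_domain("satisfaction"))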

    There are also a number of outcome measures that address the economic aspects of clinical treatment choices. Cost analysis simply measures the cost of treatment (e.g., labor, equipment, supplies, space, utilities, depreciation, overhead). It does not measure 'benefit'. Cost-benefit analysis compares dollars spent against dollars gained or saved by a treatment option. Dollar values are assigned to both the cost of treatment (cost analysis) and the costs saved, or avoided, by the treatment. For example, improved quality of life, reduced family strife, and improved employability are economic benefits to the patient. Willingness-to-pay analysis is a special category of cost-benefit analysis; it obtains data on the amount individuals are willing to pay for treatment (with or without benefit). Cost effectiveness measures the cost per unit of outcome. For example, how much does it cost for each percentage point of change on the APHAB? Cost-utility analysis relates cost to changes in quality of life. One cost-utility measure is the cost per quality-adjusted life year. This measure compares cost against benefit calculated over a patient's life expectancy.
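
    To make the arithmetic behind these measures concrete, here is a minimal Python sketch. Every number in it (treatment cost, the dollar value assigned to benefits, APHAB change, utility gain, and life expectancy) is hypothetical, and the quality-adjusted life year calculation is deliberately simplified (no discounting).

        # Hypothetical worked example of the economic measures described above.
        # All figures are invented for illustration only.
        treatment_cost = 2400.0      # cost analysis: labor, equipment, supplies, overhead, ...
        dollars_saved = 3100.0       # dollar value assigned to costs saved or avoided by treatment
        aphab_improvement = 24.0     # change on the APHAB, in percentage points
        utility_gain = 0.08          # improvement in health utility on a 0-1 scale
        remaining_life_years = 15.0  # patient's remaining life expectancy, in years

        # Cost-benefit analysis: dollars spent compared against dollars gained or saved.
        net_benefit = dollars_saved - treatment_cost

        # Cost effectiveness: cost per unit of outcome (here, per APHAB percentage point).
        cost_per_aphab_point = treatment_cost / aphab_improvement

        # Cost utility: cost per quality-adjusted life year (QALY), undiscounted.
        qalys_gained = utility_gain * remaining_life_years
        cost_per_qaly = treatment_cost / qalys_gained

        print(f"Net benefit: ${net_benefit:,.0f}")
        print(f"Cost effectiveness: ${cost_per_aphab_point:,.0f} per APHAB percentage point")
        print(f"Cost utility: ${cost_per_qaly:,.0f} per QALY ({qalys_gained:.1f} QALYs gained)")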

    D. Clinical Practice Guidelines

    Clinical practice guidelines or algorithms are usually displayed in the form of flow charts that show the stepwise processes of clinical decision making. An essential ingredient to evidence-based health care is the measurement of clinical outcomes using rigorous and scientifically-defensible methods.

    Generally speaking, a clinical algorithm consists of a series of 'action' boxes, assessment steps, and decision points. The clinical practice guideline offers recommendations for performance or exclusion of specific procedures or services through a rigorous methodological approach, appropriate criteria (effectiveness, efficacy, population benefit, or satisfaction) derived from a critical literature review (levels of evidence), and clinical decisions and choices dictated by scientific or clinical evidence. Clinical practice guidelines are not cookbooks. They allow for reasonable clinical judgement and departure from guidelines when justified. Clinical practice guidelines are not clinical pathways or care maps. Unlike clinical practice guidelines, clinical pathways, care maps, or clinical management plans organize, sequence, prioritize, and specify timing for major patient care activities for a particular diagnosis or procedure. Clinical pathways are not supported by clinical evidence, but may be derived from clinical practice guidelines.

    Clinical practice guidelines assure that appropriate care is provided, help to prevent errors, assure predictable, consistent health care delivery, assure consistent quality and resource utilization, assure predictable and consistent outcomes, assure accountability, provide a vehicle for education, establish realistic patient expectations, and stimulate research.

    A good clinical practice guideline has validity, strong supporting evidence, reliability, reproducibility, clinical applicability, clarity, multi-disciplinary consensus, scheduled review, and good documentation.

    The Agency for Health Care Policy and Research (AHCPR) developed guidelines for evaluating the strength of evidence and recommendations for clinical practice guidelines:

    Level I—usually indicated, always acceptable and considered to be useful and effective
    Research required—large, randomized controlled clinical trials with clear-cut results and low risk of error (Grade A)

    Level IIa—acceptable, uncertain efficacy and may be controversial, weight of evidence favors usefulness and effectiveness
    Research required—small randomized trials with uncertain results and moderate to high risk, well-designed clinical studies (Grade B)

    Level IIb—acceptable, uncertain efficacy and may be controversial, not well established by evidence, may be helpful and probably not harmful
    Research required—uncontrolled studies, case studies, accepted practice, expert opinion (Grade C)

    Level III—not acceptable, of uncertain efficacy and may be harmful. Does not appear in guidelines.

    Some algorithms use different levels of evidence. For example, the recently released document by the Joint Audiology Committee on Clinical Practice Algorithms and Statements (Audiology Today, Special Issue 2000) used a modified AHCPR hierarchy: Grade I (Level I), Grade II (Level IIa), and Grade III (Level IIb).
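
    For readers who prefer a lookup to prose, the small Python sketch below restates the modified hierarchy just described; the dictionary simply repeats the mapping above and carries no additional information.

        # The Joint Committee's modified AHCPR hierarchy, restated as a lookup table.
        MODIFIED_AHCPR_HIERARCHY = {
            "Grade I": "Level I",      # large randomized controlled trials (AHCPR Grade A evidence)
            "Grade II": "Level IIa",   # smaller or less certain trials and studies (AHCPR Grade B evidence)
            "Grade III": "Level IIb",  # uncontrolled studies, case studies, expert opinion (AHCPR Grade C evidence)
        }

        print(MODIFIED_AHCPR_HIERARCHY["Grade II"])  # prints: Level IIa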

    E. How Do I Read a Clinical Practice Algorithm?

    Clinical practice guidelines are usually displayed in the form of an algorithm in which the shape of the boxes indicates specific actions. Ovals represent clinical states or inputs to the clinical process. Usually, a patient presents with a specific complaint or is referred from another provider for evaluation. By convention, clinical state boxes are not numbered.

    Rectangles are action or 'do' boxes. The recommended action is described inside the box. Action boxes are numbered and always have references to supporting evidence. In the Joint Committee document, two types of references were cited: core evidence (codes A-0) and supporting documents or evidence (numeric reference numbers).

    Diamonds represent decision points. Clinical decisions or assessment outcomes always lead to two possible outcomes designated 'yes' and 'no'. Decision boxes are numbered and always have references to supporting literature. In the Joint Committee document, two types of references were cited: core evidence (codes A-0) and supporting documents or evidence (numeric reference numbers).

    Algorithms also have various types of 'nodes'. Essential nodes are steps that are required to be in the process because of their importance and strength of association. For example, box 2 in Algorithm 2 (Audiologic Assessment) is an essential node. Controversial nodes describe steps that are controversial or subject to debate. Usually, the consensus or majority opinion is described in the box. Sometimes, the box presents choices to the user. The consensus opinion is always supported by clinical evidence. For example, box 8 in Algorithm 3 (Hearing Aid Selection and Fitting) is a controversial node. Protective nodes are not necessarily essential to the flow of the algorithm, but they are included to ensure that the algorithm is applied correctly. For example, box 6 in Algorithm 3 (Hearing Aid Selection and Fitting) is a protective node. Algorithms may have supplemental comment boxes that refer to evidence or other algorithms.
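
    The structure just described can also be sketched as a small data model. The following Python fragment is a hypothetical illustration only: the box texts, numbering, and evidence codes are invented and are not taken from the Joint Committee algorithms.

        # Hypothetical sketch of a clinical practice algorithm as linked action
        # and decision nodes; box contents are invented for illustration.
        from dataclasses import dataclass, field

        @dataclass
        class Action:      # rectangle: a numbered 'do' box with references to supporting evidence
            number: int
            text: str
            evidence: list = field(default_factory=list)
            next: object = None

        @dataclass
        class Decision:    # diamond: a numbered decision point with 'yes' and 'no' branches
            number: int
            question: str
            evidence: list = field(default_factory=list)
            yes: object = None
            no: object = None

        # The clinical state (oval, unnumbered) is simply where traversal begins.
        assess = Action(1, "Perform audiologic assessment", evidence=["A", "12"])
        candidacy = Decision(2, "Is the patient a hearing aid candidate?", evidence=["B", "7"])
        fit = Action(3, "Proceed to hearing aid selection and fitting", evidence=["A", "3"])
        counsel = Action(4, "Counsel and refer as indicated", evidence=["C"])

        assess.next = candidacy
        candidacy.yes, candidacy.no = fit, counsel

        def walk(node, answers):
            # Follow the algorithm, consuming one pre-recorded yes/no answer per decision.
            while node is not None:
                if isinstance(node, Decision):
                    answer = answers.pop(0)
                    print(f"Decision {node.number}: {node.question} -> {'yes' if answer else 'no'}")
                    node = node.yes if answer else node.no
                else:
                    print(f"Action {node.number}: {node.text} (evidence: {', '.join(node.evidence)})")
                    node = node.next

        walk(assess, [True])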

    Kyle C. Dennis, Ph.D.
    Washington, D.C.

    Dr. Dennis is the National Deputy Director of the Audiology and Speech Pathology Service, Department of Veterans Affairs. Dr. Dennis was a consultant to the Joint Audiology Committee on Clinical Practice Algorithms and Statements. The opinions stated herein are solely those of the author and do not necessarily reflect positions or policies of the Department of Veterans Affairs.

    F. References

    Abrams, H. and Hnath-Chisolm, T. (2000). 'Outcome measures: The audiologic difference'. In Valente, M., Hosford-Dunn, H., and Roeser, R. (Eds.): Audiology: Diagnosis, Treatment, and Practice Management. New York: Thieme.

    Abrams, H. (2000) 'Introduction to Amplification Outcome Measures' and 'Outcome Procedures and Data Interpretation', ASHA Telephone Seminar, July 28, 2000. American Speech-Language-Hearing Association.

    Abrams, H. (1999). 'Outcome Measures in Audiologic Practice', United States Department of Veterans Affairs.

    Beck, L. (2000). Personal communication.

    Bergner, M., Bobbitt, R., Carter, W., and Gilson, B. (1981). The Sickness Impact Profile: Development and final revision of a health status measure. Medical Care 19: 787-805.

    Fishelman, L. (1996). Clinical Algorithms Mini Course. West Virginia Medical Institute and Birch and Davis Associates.

    Joint Audiology Committee on Clinical Practice Algorithms and Statements (2000). Clinical Practice Guidelines and Statements. Audiology Today, Special Issue.

    Cox, R. and Alexander, G. (1995). The Abbreviated Profile of Hearing Aid Benefit. Ear and Hearing 16: 176-186.

    Demorest, M. and Erdman, S. (1987). Development of the Communication Profile for the Hearing Impaired. Journal of Speech and Hearing Disorders 52: 129-143.

    Dillon, H., James, A., and Ginis, J. (1997). Client Oriented Scale of Improvement (COSI) and its relationship to several other measures of benefit and satisfaction provided by hearing aids. Journal of the American Academy of Audiology 8: 27-43.

    Feeny, D., Torrance, G., and Furlong, W. (1996). Health Utilities Index. In Spilker, B. (Ed.): Quality of Life and Pharmacoeconomics in Clinical Trials, 2nd Edition. Philadelphia: Lippincott-Raven, pp. 239-252.

    Gatehouse, S. (1999). Glasgow Hearing Aid Benefit Profile: Derivation and validation of a client-centered outcome measure for hearing aid services. Journal of the American Academy of Audiology 10: 80-103.

    Newman, C., Weinstein, B., Jacobson, G., and Hug, G. (1991). Test-retest reliability of the Hearing Handicap Inventory for Adults. Ear and Hearing, 12: 355-357.

    Ventry, I and Weinstein, B. (1982). The Hearing Handicap Inventory for the Elderly: A new tool. Ear and Hearing 3: 128-134.

    Ware, J. and Sherbourne, C. (1992). The MOS 36-item short-form survey (SF-36) I: Conceptual framework and item selection. Medical Care 30: 473-481.

    VHA Directive 96-053, 'Roles and Definitions for Clinical Practice Guidelines and Clinical Pathways,' Veterans Health Administration, United States Department of Veterans Affairs.

    World Health Organization (1997) Towards a Common Language for Functioning and Disablement:
    ICIDH-2 Beta Draft for Field Trials. Geneva: World Health Organization.