How Sign Language Users Learn Intonation

Linguistic Society of America - A spoken language is more than just words and sounds. Speakers use changes in pitch and rhythm, known as prosody, to provide emphasis, show emotion, and otherwise add meaning to what they say. But a language does not need to be spoken to have prosody: sign languages, such as American Sign Language (ASL), use movements, pauses, and facial expressions to achieve the same goals. In a study appearing in the September 2015 issue of Language, three linguists examine intonation (a key part of prosody) in ASL and find that native ASL signers learn intonation in much the same way that users of spoken languages do.

Diane Brentari (University of Chicago), Joshua Falk (University of Chicago), and George Wolford (Purdue University) studied how deaf children (ages 5-8) who were native learners of ASL used intonational features such as 'sign lengthening' and facial cues as they acquired the language. They found that children learned these features in three stages, "appearance, reorganization, and mastery": first accurately replicating their use in simpler contexts, then attempting, initially without success, to use them in more challenging contexts, and finally using them accurately in all contexts once the rules of prosody were fully learned. Previous research has shown that native learners of spoken languages acquire intonation following a similar pattern. Brentari et al. also found that young ASL signers use certain intonational features at different frequencies than adult ASL signers.

This study, "The acquisition of prosody in American Sign Language", is the first comparative analysis of prosody in ASL between children and adults who are native ASL signers, and helps demonstrate the similarities in language acquisition between signed and spoken languages. This research may also make it easier to accurately transcribe certain linguistic units of ASL, which could benefit automatic ASL translation through motion-capture software. Brentari et al.'s research was supported by grants from the National Science Foundation and the University of Chicago's Center for Gesture, Sign, and Language.

Source: https://www.eurekalert.org/pub_releases/2015-09/lsoa-hsl092815.php
