More than words: How communication signals affect what you hear 

A graphic showing two people speaking to each other with different countries' flags in between, indicating they're speaking different languages.

“Dunno.” 

It’s not really a word, much less a sentence. Yet if a conversational partner grunted it at you after you asked whether it’s cold outside, you would intuitively know what they said: “I don’t know.” 

Well, you’d understand that if English were your first language. If it’s not, no matter how fluent you are, the meaning of that grunt isn’t intuitive, especially the first time you hear it. Figuring out “dunno” requires using context to mentally expand the reduced form into the full phrase being communicated. 

This phenomenon, called reduction, is common in conversational speech—so common, in fact, that most of us don’t notice it’s happening, and researchers are just starting to investigate the role reduction plays in communication. A new study, led by University of Arizona researcher Natasha Warner and co-authored by Benjamin Tucker, a professor of speech science in the Department of Communication Sciences and Disorders at Northern Arizona University, examines how non-native listeners hear and interpret these reductions, including figuring out verb tense and sorting through relevant and less relevant context.  

“The speech signal is packed with lots of information, and a lot of that information is redundant,” Tucker said. “It’s why we can communicate when there’s lots of noise around—we can default to other cues.” 

The researchers looked at native Dutch speakers who speak and understand English. Many are fluent, with an advanced grasp of the grammar, vocabulary and complicated sentence structures of their second language. The team recorded these non-native listeners’ responses to reduced words produced by native English speakers and observed which cues the listeners used to fill the comprehension gap. 

The short answer? Context. If you’re asking a question seeking information, “I don’t know” is an expected answer, so its abbreviated form is easier to interpret. 


But they also found that some factors, such as use of the conversational filler “like,” helped listeners determine verb tense. If a speaker starts a sentence with “he’s,” it’s not immediately obvious whether that’s short for “he is” or “he was.” But the more “like” made its way into a conversation, the more likely listeners were to correctly interpret the sentence as a story happening in the past. 

There was also an X factor of sorts that the researchers identified. When the context was ambiguous, most of the non-native listeners still identified the correct usage most of the time because they listen differently than native speakers do. Why? Tucker isn’t sure. 

“There’s something left in the signal that I can’t see, that I can’t figure out, that listeners were able to recover,” he said.  

What this means for other languages 

Benjamin Tucker recording conversations in a sound lab.

Every language behaves in different ways—different grammar rules, sentence construction, filler words, tone and so on. People also learn different cues based on their native languages—cues that aren’t always easy to identify for the speaker or the researcher. They’re almost innate; babies begin learning their native language from day 1, when their parents start whispering baby talk.  

“We can never get away from our native language,” Tucker said. “Your perception is highly influenced by your native language, so non-native listeners are going to use cues that are relevant in their language but not relevant in the second language.” 

This also means that Dutch speakers listening to English use different cues than Spanish speakers do, and different cues again than Japanese speakers use. Each group picks up different signals that go unnoticed by native English speakers. 

What this means for language learning 

Likely not much for learning a second language, Tucker said, but it does have interesting implications for working with people who have speech and communication disorders. Because there’s limited research on conversational speech, researchers want to learn how clinical practice translates once patients leave the clinic. For example, how does a child learning to pronounce their Rs handle everyday conversation in the real world? 

The same is true for Parkinson’s patients and others whose speech is limited by another condition. 

One of Tucker’s next projects, at the intersection of applied linguistics and communication disorders, looks at differences in the pronunciation of Spanish words among three types of speakers: native Spanish speakers, Spanish language learners and heritage speakers of Spanish, or those for whom Spanish is a first language but who are now more comfortable speaking English. 


Heidi Toth | NAU Communications
(928) 523-8737 | heidi.toth@nau.edu
