Early Stage Researcher: Huarda Valdes-Laribi
Main supervisor: Prof. Sven Mattys
Second supervisor: Prof. Ewen MacDonald
Main host institution: University of York
Second host institution: Technical University of Denmark
When was the last time you had a conversation with only one person talking at a time and no background noise? In our everyday interactions, the information conveyed by a speaker reaches our brains only after overcoming a variety of adverse conditions. The whirring of a fan, the conversations of colleagues, the chattering of children: all of these interfere with the target signal. Although psycholinguistics has spent years studying speech perception and comprehension and untangling their different components, these studies have typically been conducted in optimal, quiet listening conditions. Moreover, research on speech in adverse conditions has mainly focused on perception, that is, on recognising the words themselves. Yet our task in everyday conversations is to understand the sentences that we perceive. Many people report that in noisy situations, such as a restaurant, they can hear the words but cannot necessarily make sense of the sentence. A natural next step for speech research is therefore the study of sentence comprehension in adverse conditions.
Adverse conditions can lead to a degradation of the acoustic signal, which is referred to as “energetic masking”. This occurs when the whirring of the fan blends in with the voice of the person you are trying to listen to, covering up part of what you would like to hear. When you are listening to someone speak while another person is also speaking (a competing talker), another type of masking, termed “informational masking”, is added to the energetic masking. Informational masking is broadly construed as “whatever is left after the degradation of the acoustic signal has been accounted for”, and it depletes domain-general cognitive resources such as memory and attention. However, the cognitive factors at play have yet to be precisely defined.
The pictures below illustrate energetic and informational masking, where (A) and (B) correspond to the original signals, (C) and (D) correspond to the two signals with energetic masking alone, and (E) is the combination of (C) and (D), resulting in informational masking with energetic masking.
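To make the idea of energetic masking concrete, here is a minimal sketch of how a masker can be mixed with a target signal at a chosen signal-to-noise ratio (SNR), so that the masker physically overlaps and covers part of the target. The function name, the synthetic tone standing in for speech, and the noise standing in for the fan are all illustrative assumptions, not stimuli from the actual study.

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Scale the masker so the target-to-masker power ratio equals
    snr_db, then add the two signals (energetic masking)."""
    p_target = np.mean(target ** 2)
    p_masker = np.mean(masker ** 2)
    # gain applied to the masker to reach the requested SNR
    gain = np.sqrt(p_target / (p_masker * 10 ** (snr_db / 10)))
    return target + gain * masker

# Illustrative stand-ins (not real experimental stimuli):
rng = np.random.default_rng(0)
fs = 16000                            # assumed sample rate in Hz
t = np.arange(fs) / fs                # one second of samples
target = np.sin(2 * np.pi * 220 * t)  # a pure tone as the "voice"
masker = rng.standard_normal(fs)      # white noise as the "fan"
mixture = mix_at_snr(target, masker, snr_db=0.0)
```

At 0 dB SNR, target and masker contribute equal power to the mixture; lowering `snr_db` makes the masker dominate, which is one way laboratories parametrically vary the difficulty of the listening condition.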
This research aims to tease apart the factors involved in informational masking, in particular the increased load it places on working memory, and its effect on the comprehension of different syntactic structures. I would like to determine whether the same cognitive resources are involved in processing increasingly complex syntax under increasingly difficult listening conditions. Participants will point to the one of three pictured characters that matches a sentence heard over headphones. The target sentence will be presented in three conditions: in quiet, with a competing talker (i.e. informational masking), or with speech-modulated noise (i.e. energetic masking).
Sentences will be syntactically complex object relatives (e.g. “Show the cat that the pig is licking”), less complex subject relatives (e.g. “Show the cat that is licking the pig”), or simple structures (e.g. “Show the cow with the red hat”). As syntax becomes more complex, the toll on working memory rises. Similarly, we hypothesize that informational masking will tax working memory more than energetic masking, yielding longer reaction times and less accurate responses.
The current study aims to contribute to the growing field of speech-in-noise research by using a sentence comprehension paradigm not often used in this context, and by further specifying the definition of informational masking through an attempt to quantify the contribution of working memory. A better understanding of these mechanisms will allow us to construct more integrated models of speech perception, at the interface with cognition. Our findings could be useful for educators, for hearing aid and cochlear implant manufacturers and users, and for anyone who wants to follow a conversation!