ID3: How does working memory affect our ability to understand sentences in difficult listening conditions?

When was the last time you were involved in a conversation with only one person talking at a time and no background noise?  In our everyday interactions, the information conveyed by a speaker reaches our brains only after overcoming a variety of adverse conditions.  The whirring of a fan, the conversations of colleagues, the chattering of children: all of these interfere with the target signal.  Although psycholinguists have been studying speech perception and comprehension, and untangling their different components, for years, these studies have typically been run in optimal, quiet listening conditions.  Moreover, research on speech in adverse conditions has mainly focused on perception, whereas our task in everyday conversations is to understand the sentences that we perceive.  Many people report that in noisy situations, such as in a restaurant, they can hear the words but cannot necessarily make sense of the sentence.  A natural next step in speech research is therefore the study of sentence comprehension in adverse conditions.

Adverse conditions can degrade the acoustic signal itself, which is referred to as “energetic masking”.  This occurs when the whirring of the fan blends in with the voice of the person you are trying to listen to, covering up part of what you would like to hear.  When you are listening to someone speak while another person is also speaking (a competing talker), another type of masking, termed “informational masking”, is added on top of the energetic masking.  Informational masking is broadly construed as “whatever is left once the degradation of the acoustic signal has been accounted for”, and it depletes domain-general cognitive resources such as memory and attention.  However, the cognitive factors at play have yet to be defined.

The pictures below illustrate energetic and informational masking, where (A) and (B) correspond to the original signals, (C) and (D) correspond to the two signals with energetic masking alone, and (E) is the combination of (C) and (D), resulting in informational masking with energetic masking.

[Figure: panels (A)–(E)]
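As a rough code analogue of the energetic masking illustrated above, here is a minimal sketch, assuming NumPy and two hypothetical one-dimensional arrays of audio samples (the names speech and fan_noise are illustrative, not materials from the study), of mixing a target with a masker at a chosen signal-to-noise ratio:

```python
import numpy as np

def mix_at_snr(target: np.ndarray, masker: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the masker to sit snr_db below (or above) the target, then add the two.

    A toy illustration of energetic masking: the masker's energy overlaps the
    target's in time and frequency, covering part of it up.
    Assumes the masker is at least as long as the target.
    """
    masker = masker[: len(target)]                          # trim to a common length
    target_rms = np.sqrt(np.mean(target ** 2))
    masker_rms = np.sqrt(np.mean(masker ** 2))
    gain = target_rms / (masker_rms * 10 ** (snr_db / 20))  # scale masker to the desired SNR
    return target + gain * masker

# Hypothetical usage: 'speech' and 'fan_noise' would be 1-D arrays of audio samples.
# mixed = mix_at_snr(speech, fan_noise, snr_db=0.0)   # 0 dB SNR: target and masker equally loud
```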

This research aims to tease apart the factors involved in informational masking, in particular when the load on working memory is increased, and to examine its effect on the comprehension of different syntactic structures.  I would like to determine whether the same cognitive resources are involved in processing increasingly complex syntax under increasingly difficult listening conditions.  Participants will point to one of three characters corresponding to a sentence heard over headphones.  The target sentence will be presented in three conditions: in quiet, with a competing talker (i.e. informational masking), or with speech-modulated noise (i.e. energetic masking).

Sentences will be syntactically complex object relatives (e.g. “Show the cat that the pig is licking”), less complex subject relatives (e.g. “Show the cat that is licking the pig”), or simple structures (e.g. “Show the cow with the red hat”).  As syntax becomes more complex, the toll on working memory rises.  Similarly, we hypothesize that informational masking will draw on working memory more heavily than energetic masking, yielding longer reaction times and less accurate responses.
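To make the design described in the two preceding paragraphs concrete, here is a minimal sketch of how the masking conditions could be crossed with the sentence types, assuming Python; the function name, repeat count, and condition labels are illustrative, not the actual experiment script:

```python
import itertools
import random

# Hypothetical labels for the 3 x 3 design sketched in the text.
MASKING = ["quiet", "competing_talker", "speech_modulated_noise"]   # quiet vs informational vs energetic masking
SENTENCE_TYPE = ["object_relative", "subject_relative", "simple"]   # decreasing syntactic complexity

def build_trial_list(n_repeats: int = 10, seed: int = 0) -> list:
    """Cross every masking condition with every sentence type and shuffle the result.

    Each trial would later be paired with an audio file and a picture triplet;
    accuracy and reaction time are the dependent measures mentioned in the text.
    """
    rng = random.Random(seed)
    trials = [
        {"masking": m, "sentence_type": s, "repeat": r}
        for m, s in itertools.product(MASKING, SENTENCE_TYPE)
        for r in range(n_repeats)
    ]
    rng.shuffle(trials)
    return trials

# Example: 3 masking conditions x 3 sentence types x 10 repeats = 90 trials per participant.
# trials = build_trial_list()
```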

The current study aims to contribute to the growing field of speech-in-noise research by using a sentence comprehension paradigm rarely applied in this context, and by further specifying the definition of informational masking through an attempt to quantify the contribution of working memory.  A better understanding of these mechanisms will allow us to construct more integrated models of speech perception, at the interface with cognition.  Our findings could be useful for educators, for hearing aid and cochlear implant manufacturers and users, and for anyone who wants to follow a conversation!

