
Research

The overall aim of our research is to understand how the human brain combines expectations and sensory information to communicate. The ability to communicate successfully with other people is an essential everyday skill. Unravelling how the human brain derives meaning from acoustic speech signals and recognizes a communication partner from their face is therefore an important scientific endeavour.

Speech recognition depends both on the clarity of the acoustic input and on what we expect to hear. In noisy listening conditions, for example, listeners presented with identical speech input can differ in their perception of what was said. Similarly, for face recognition, brain responses to faces depend on expectations and do not simply reflect the presented facial features.

These findings for speech and face recognition are compatible with the more general view that perception is an active process in which incoming sensory information is interpreted with respect to expectations. The neural mechanisms supporting such integration of sensory signals and expectations, however, remain to be identified. Conflicting theoretical and computational models have been suggested for how, when, and where expectations and new sensory information are combined.

Predictions in speech perception

Speech perception is heavily influenced by our expectations about what will be said. In this review, we discuss the potential of multivariate analysis as a tool to understand the neural mechanisms underlying predictive processes in speech perception.

First, we discuss the advantages of multivariate approaches and what they have added to our understanding of speech processing, from the acoustic-phonetic form of speech, through syllable identity and syntax, to its semantic content. Second, we suggest that using multivariate techniques to measure informational content across the hierarchically organised speech-sensitive brain areas might enable us to specify the mechanisms by which prior knowledge and sensory speech signals are combined. Specifically, this approach might allow us to decode how different priors, e.g. about a speaker’s voice or about the topic of the current conversation, are represented at different processing stages and how, as a result, incoming speech is represented differently.
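To make the logic of such decoding analyses concrete, here is a minimal sketch assuming simulated voxel patterns and a generic linear classifier from scikit-learn; the two conditions, effect size, and data dimensions are invented for illustration and are not taken from the studies described on this page.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated experiment: 80 trials x 200 voxels, two hypothetical conditions
# (e.g. expected vs. unexpected speech) separated by a weak multivoxel signal.
n_trials, n_voxels = 80, 200
condition = np.repeat([0, 1], n_trials // 2)
signal = rng.normal(0, 1, n_voxels)
patterns = rng.normal(0, 1, (n_trials, n_voxels)) + 0.3 * np.outer(condition, signal)

# Cross-validated classification accuracy quantifies how much condition
# information ("informational content") the voxel patterns carry.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, patterns, condition, cv=5).mean()
print(f"mean decoding accuracy: {accuracy:.2f}")
```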

Ufer, C., & Blank, H. (2023). Multivariate analysis of brain activity patterns as a tool to understand predictive processes in speech perception. Language, Cognition and Neuroscience. doi.org/10.1080/23273798.2023.2166679

When different speakers articulate identical words, the physical properties of the produced sounds can vary substantially. Fundamental frequency (F0) and formant frequencies (FF), the two main parameters that discriminate between voices, also influence vowel perception. In this study, we investigated how listeners use prior information about a speaker when decoding the speech stream.

Although in everyday circumstances we usually have no difficulty decoding a speaker’s intended word from the audio stream, listeners benefit from general familiarity with a target speaker when comprehending speech. We used a twofold approach to investigate the influence of speaker expectations on vowel perception. On the one hand, we showed how shifts in FF and F0 influenced vowel perception depending on context, i.e., on whether the FF and F0 voice characteristics of the speaker could or could not be anticipated. On the other hand, we observed that different expected speaker FF and F0 characteristics influenced the perception of identical, unshifted vowels in a contrastive manner. In conclusion, our findings support the view that knowledge about a speaker’s voice characteristics influences vowel perception.
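As a purely illustrative toy of such contrastive, speaker-dependent perception, the sketch below categorises the same acoustic F1 value differently depending on the formant range attributed to the expected speaker; the decision rule and all frequency values are hypothetical and are not the model or stimulus parameters used in the study.

```python
# Toy illustration of contrastive speaker normalisation: the same acoustic
# F1 value is categorised relative to the formant range expected for the
# current speaker. The decision rule and all frequencies are hypothetical.

def perceived_height(f1_hz, expected_mean_f1_hz):
    """Categorise vowel height from F1, relative to the expected speaker."""
    return ("more open vowel (high F1 for this speaker)"
            if f1_hz > expected_mean_f1_hz
            else "more close vowel (low F1 for this speaker)")

f1 = 550.0  # identical acoustic input on every trial
for speaker, mean_f1 in [("expected speaker with low formants", 450.0),
                         ("expected speaker with high formants", 650.0)]:
    print(f"{speaker}: F1 = {f1:.0f} Hz -> {perceived_height(f1, mean_f1)}")
```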

Krumbiegel, J., Ufer, C., & Blank, H. (2022). Influence of voice properties on vowel perception depends on speaker context. The Journal of the Acoustical Society of America, 152, 820–834. doi.org/10.1121/10.0013363

Perception inevitably depends on combining sensory input with prior expectations, particularly for the identification of degraded input. However, the neural mechanism by which expectations influence sensory processing is unclear. Predictive Coding suggests that the brain passes forward only the unexpected part of the sensory input while expected properties are suppressed (Prediction Error). Yet evidence to rule out the opposite and perhaps more intuitive mechanism, in which the expected part of the sensory input is enhanced (Sharpening), has been lacking.

We investigated the neural mechanisms by which sensory clarity and prior knowledge influence the perception of degraded speech. A univariate measure of brain activity obtained from functional magnetic resonance imaging (fMRI) was in line with both neural mechanisms (Prediction Error and Sharpening). However, combining multivariate fMRI measures with computational simulations allowed us to determine the underlying mechanism. Our key finding was an interaction between sensory input and prior expectations: For unexpected speech, increasing speech clarity increased the amount of information represented in sensory brain areas. In contrast, for speech that matched prior expectations, increasing speech clarity reduced the amount of this information. Our observations were uniquely simulated by a model of speech perception that included Prediction Errors.
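As a conceptual toy (not the computational model used in the study), the sketch below contrasts a Sharpening-style representation, which boosts expected features, with a Prediction-Error-style representation, which subtracts them, and measures how distinguishable the patterns for two different spoken words are as clarity and expectation match vary; all dimensions, gain rules, and values are invented.

```python
# Conceptual toy comparing Sharpening (expected features enhanced) with
# Prediction Error (expected features subtracted). "Information" is taken
# here as the distance between the patterns evoked by two different spoken
# words: the larger the distance, the easier the words are to tell apart.
# All dimensions, gain rules, and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200
word_a, word_b = rng.normal(0, 1, (2, n_voxels))  # idealised word patterns

def sensory(word, clarity):
    """Degraded input: a clarity-weighted mix of the word pattern and noise."""
    return clarity * word + (1 - clarity) * rng.normal(0, 1, n_voxels)

def sharpening(inp, prior):
    """Features expected under the prior receive a higher gain."""
    return inp * (1 + np.abs(prior))

def prediction_error(inp, prior):
    """The expected pattern is subtracted; only the residual is passed on."""
    return inp - prior

for scheme in (sharpening, prediction_error):
    for match in (True, False):
        for clarity in (0.2, 0.8):
            # Matching priors predict the spoken word; mismatching priors
            # predict the other word.
            prior_a, prior_b = (word_a, word_b) if match else (word_b, word_a)
            info = np.linalg.norm(scheme(sensory(word_a, clarity), prior_a)
                                  - scheme(sensory(word_b, clarity), prior_b))
            print(f"{scheme.__name__:16} match={match!s:5} "
                  f"clarity={clarity}: pattern distance = {info:5.1f}")
```

In this toy, only the prediction-error scheme shows the crossover described above: with matching expectations the pattern distance shrinks as clarity increases, whereas with mismatching expectations it grows.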

Blank, H., & Davis, M. (2016). Prediction errors but not sharpened signals simulate multivoxel fMRI patterns during speech perception. PLOS Biology, 14(11). doi.org/10.1371/journal.pbio.1002577

The ability to draw on past experience is important for keeping up with a conversation, especially in noisy environments where speech sounds are hard to hear. However, these prior expectations can sometimes mislead listeners, convincing them that they heard something the speaker did not actually say.

To investigate the neural underpinnings of speech misperception, we presented participants with pairs of written and degraded spoken words that were either identical, clearly different, or similar-sounding. Reading and hearing similar-sounding words (like kick followed by pick) led to frequent misperception.

Using fMRI, we found that misperception was associated with reduced activity in the left superior temporal sulcus, a brain region critical for processing speech sounds. Furthermore, when perception of speech was more successful, this brain region represented the difference between prior expectations and heard speech (like the initial k/p in kick-pick).

Blank, H., Spangenberg, M., & Davis, M. (2018). Neural prediction errors distinguish perception and misperception of speech. The Journal of Neuroscience, 38(27), 6076–6089. https://doi.org/10.1523/JNEUROSCI.3258-17.2018

Predictions in face perception

In everyday life, we face our environment with prior expectations about what we are going to encounter, for example, whom we are most likely to see at a certain location. These expectations have to be weighted according to their probability: a student who regularly enters our office is expected with a higher probability than a shy colleague we rarely meet. In this study, we show that the human brain weights face priors according to their certainty in high-level face-sensitive regions.

We used functional magnetic resonance imaging (fMRI) in combination with multivariate methods to test whether the strength of face expectations can be detected alongside expected face identity in face-sensitive regions (i.e., OFA, FFA, and aTL) of the human brain. Participants used scene cues to predict face identities with different probabilities. We found evidence that representations of expected face identities were weighted according to their probability in the high-level face-sensitive aTL.
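As a minimal numerical sketch of what probability-weighted expectations mean, the toy below combines a scene-cue prior of varying validity with fixed, ambiguous face evidence in a two-identity Bayesian model; the cue validities and likelihood values are invented and are not the design parameters of the study.

```python
# Toy Bayesian illustration of probability-weighted identity expectations.
# Cue validities and likelihoods are invented for demonstration.
import numpy as np

def posterior(prior, likelihood):
    """Combine the cue-based prior with the evidence from the face image."""
    unnormalised = np.asarray(prior) * np.asarray(likelihood)
    return unnormalised / unnormalised.sum()

# Weak, ambiguous face evidence slightly favouring identity A over B.
likelihood = [0.55, 0.45]

for cue_validity in (0.5, 0.75, 0.9):
    prior = [cue_validity, 1 - cue_validity]  # scene cue predicts identity A
    p_a = posterior(prior, likelihood)[0]
    print(f"cue validity {cue_validity:.2f} -> P(identity A | cue, face) = {p_a:.2f}")
```

With a more valid cue, the same ambiguous face evidence yields a posterior that is pulled more strongly towards the expected identity.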

Blank, H., Alink, A., & Büchel, C. (2023). Multivariate functional neuroimaging analyses reveal that strength-dependent face expectations are represented in higher-level face-identity areas. Communications Biology, 6, 135. doi.org/10.1038/s42003-023-04508-8

By combining fMRI with diffusion-weighted imaging, we showed that the brain is equipped with direct structural connections between face- and voice-recognition areas. These connections may allow learned associations between faces and voices to be activated even in unimodal conditions, improving person-identity recognition.

According to hierarchical models of person-identity recognition, information from faces and voices is integrated only at later processing stages, after identity has been recognized within each modality. However, functional neuroimaging studies showed that the fusiform face area is activated by familiar voices during auditory-only speaker recognition. To test for direct structural connections between face- and voice-recognition areas, we localized voice-sensitive areas in the anterior, middle, and posterior STS and face-sensitive areas in the fusiform gyrus. Probabilistic tractography revealed evidence for direct structural connections between these regions. These connections appeared to be functionally relevant, because they were particularly strong between the areas in the anterior/middle STS that are engaged during processing of voice identity, in contrast to the areas in the posterior STS that process less identity-specific acoustic features.

What kind of information is exchanged between these specialized areas during cross-modal recognition of other individuals? To address this question, we used functional magnetic resonance imaging and a voice-face priming design, in which familiar voices were followed by morphed faces that matched or mismatched the voice with respect to identity or physical properties. The results showed that responses in face-sensitive regions were modulated when face identity or physical properties did not match the preceding voice. The strength of this mismatch signal depended on how certain the participant was about the voice identity. This suggests that the voice provided face areas with both identity and physical-property information.

Blank, H., Anwander, A., & von Kriegstein, K. (2011). Direct structural connections between voice- and face-recognition areas. The Journal of Neuroscience, 31(36), 12906-12915.  https://doi.org/10.1523/JNEUROSCI.2091-11.2011

Blank, H., Kiebel, S. J., & von Kriegstein, K. (2015). How the human brain exchanges information across sensory modalities to recognize other people. Human Brain Mapping, 36(1), 324–339. http://dx.doi.org/10.1002/hbm.22631

In a noisy environment, it is often very helpful to see the mouth of the person you are speaking to. When our brain can combine information from different sensory sources, for example during lip-reading, speech comprehension improves.

We investigated this phenomenon in more detail to uncover how visual and auditory brain areas work together during lip-reading. In the experiment, brain activity was measured with functional magnetic resonance imaging while participants heard short sentences and then watched a short silent video of a person speaking. With a button press, they indicated whether the sentence they had heard matched the mouth movements in the video. If the sentence did not match the video, a part of the brain network that combines visual and auditory information showed greater activity, and connectivity between the auditory speech region and the STS increased. The strength of this activation depended on participants' lip-reading skill: the stronger the activation, the more accurate their responses. This effect seemed to be specific to the content of speech; it did not occur when participants had to decide whether the identity of the voice and face matched.

Blank, H. & von Kriegstein, K. (2013). Mechanisms of enhancing visual-speech recognition by prior auditory information. Neuroimage, 65, 109-118.  http://dx.doi.org/10.1016/j.neuroimage.2012.09.047