Research

The overall aim of our research is to understand how the human brain combines expectations and sensory information during communication. Our ability to communicate successfully with other people is an essential skill in everyday life. Unravelling how the human brain derives meaning from acoustic speech signals and recognizes a communication partner from their face is therefore an important scientific endeavour.

Speech recognition depends both on the clarity of the acoustic input and on what we expect to hear. For example, in noisy listening conditions, listeners presented with identical speech input can differ in their perception of what was said. Similarly, for face recognition, brain responses to faces depend on expectations and do not simply reflect the presented facial features.

These findings for speech and face recognition are compatible with the more general view that perception is an active process in which incoming sensory information is interpreted with respect to expectations. The neural mechanisms supporting such integration of sensory signals and expectations, however, remain to be identified. Conflicting theoretical and computational models have been suggested for how, when, and where expectations and new sensory information are combined.

Prediction errors during speech perception

Perception inevitably depends on combining sensory input with prior expectations. This is particularly critical for identification of degraded input. However, the underlying neural mechanism by which expectations influence sensory processing is unclear. Recent theories of Predictive Coding suggest that the brain passes forward the unexpected part of the sensory input while expected properties are suppressed (Prediction Error). However, evidence to rule out the opposite and perhaps more intuitive mechanism, in which the expected part of the sensory input is enhanced or sharpened (Sharpening), has been lacking.

We investigated the neural mechanisms by which sensory clarity and prior knowledge influence the perception of degraded speech. A univariate measure of brain activity obtained from functional magnetic resonance imaging (fMRI) was consistent with both neural mechanisms (Prediction Error and Sharpening). However, combining multivariate fMRI measures with computational simulations allowed us to determine the underlying mechanism. Our key finding was an interaction between sensory input and prior expectations: for unexpected speech, increasing speech clarity increased the amount of information represented in sensory brain areas, whereas for speech that matched prior expectations, increasing speech clarity reduced this information. Our observations were reproduced only by a model of speech perception that computes Prediction Errors.
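
The computational contrast between the two schemes can be conveyed with a deliberately simplified toy simulation (a NumPy sketch, not the published model; the variable names, the clarity parameter, and the signal measure are illustrative assumptions). A Prediction Error scheme subtracts the expected pattern from the noisy input, whereas a Sharpening scheme weights the input by the expected pattern; only the subtractive scheme produces the interaction described above, with speech-specific signal falling with clarity when the prior matches but rising when it does not.

```python
# Toy sketch (not the published simulations): contrast two candidate computations
# for how a prior expectation could shape the pattern evoked by degraded speech.
import numpy as np

rng = np.random.default_rng(0)
n_features = 1000                      # hypothetical feature channels (e.g. voxels)
n_trials = 50                          # average over noise samples
speech = rng.normal(size=n_features)   # pattern of the spoken word
other = rng.normal(size=n_features)    # pattern of a different, expected-but-wrong word

def degraded(clarity):
    """Noisy sensory input: a clarity-weighted mix of the speech pattern and noise."""
    noise = rng.normal(size=n_features)
    return clarity * speech + (1.0 - clarity) * noise

def represent(inp, prior, scheme):
    if scheme == "prediction error":
        return inp - prior             # pass forward only the unexpected part
    if scheme == "sharpening":
        return inp * np.abs(prior)     # enhance features expected under the prior
    raise ValueError(scheme)

def speech_signal(rep):
    """Crude magnitude-sensitive proxy for speech-specific information in the pattern."""
    return abs(rep @ speech) / n_features

for scheme in ("prediction error", "sharpening"):
    print(scheme)
    for clarity in (0.2, 0.5, 0.8):
        match = np.mean([speech_signal(represent(degraded(clarity), speech, scheme))
                         for _ in range(n_trials)])
        mismatch = np.mean([speech_signal(represent(degraded(clarity), other, scheme))
                            for _ in range(n_trials)])
        print(f"  clarity={clarity:.1f}  matching prior: {match:.2f}  "
              f"mismatching prior: {mismatch:.2f}")
```

In this toy version the subtraction makes the residual shrink as the input approaches the matching expectation, which is the intuition behind the observed reduction of pattern information for expected, clear speech; under the sharpening scheme, signal grows with clarity regardless of the prior.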

Blank, H., & Davis, M. (2016). Prediction errors but not sharpened signals simulate multivoxel fMRI patterns during speech perception. PLOS Biology, 14(11). http://dx.doi.org/10.1371/journal.pbio.1002577

Slips of the Ear: When Knowledge Deceives Perception

The ability to draw on past experience is important for keeping up with a conversation, especially in noisy environments where speech sounds are hard to hear. However, these prior expectations can sometimes mislead listeners, convincing them that they heard something the speaker did not actually say.

To investigate the neural underpinnings of speech misperception, we presented participants with pairs of written and degraded spoken words that were either identical, clearly different, or similar-sounding. Reading and then hearing similar-sounding words (like kick followed by pick) led to frequent misperception.

Using fMRI, we found that misperception was associated with reduced activity in the left superior temporal sulcus (STS), a brain region critical for processing speech sounds. Furthermore, when perception of speech was more successful, this brain region represented the difference between prior expectations and heard speech (like the initial k/p in kick-pick).

Blank, H., Spangenberg, M., & Davis, M. (2018). Neural prediction errors distinguish perception and misperception of speech. The Journal of Neuroscience, 38(27), 6076-6089. https://doi.org/10.1523/JNEUROSCI.3258-17.2018

Direct structural connections between face and voice areas

By combining fMRI with diffusion-weighted imaging, we showed that the brain is equipped with direct structural connections between face- and voice-recognition areas. These connections may allow learned associations between faces and voices to be activated even in unimodal conditions, improving person-identity recognition.

According to hierarchical processing models of person-identity recognition, information from faces and voices is integrated only at later stages, after identity has been recognized within each modality. However, functional neuroimaging studies have shown that the fusiform face area is activated by familiar voices during auditory-only speaker recognition. To test for direct structural connections between face- and voice-recognition areas, we localized voice-sensitive areas in the anterior, middle, and posterior STS and face-sensitive areas in the fusiform gyrus. Probabilistic tractography revealed evidence for direct structural connections between these regions. These connections appear to be functionally relevant: they were particularly strong between the areas engaged in processing voice identity in the anterior/middle STS, in contrast to areas in the posterior STS that process less identity-specific acoustic features.

What kind of information is exchanged between these specialized areas during cross-modal recognition of other individuals? To address this question, we used functional magnetic resonance imaging and a voice-face priming design in which familiar voices were followed by morphed faces that matched or mismatched the preceding voice in identity or in physical properties. Responses in face-sensitive regions were modulated when the face identity or its physical properties did not match the preceding voice, and the strength of this mismatch signal depended on how certain the participant was about the voice identity. This suggests that both identity and physical-property information is conveyed from voice to face areas.

Blank, H., Anwander, A., & von Kriegstein, K. (2011). Direct structural connections between voice- and face-recognition areas. The Journal of Neuroscience, 31(36), 12906-12915.  https://doi.org/10.1523/JNEUROSCI.2091-11.2011

Blank, H., Kiebel, S. J., & von Kriegstein, K. (2015). How the human brain exchanges information across sensory modalities to recognize other people. Human Brain Mapping, 36(1), 324-339. http://dx.doi.org/10.1002/hbm.22631

Lipreading: How we “hear” with our eyes

In a noisy environment, it is often very helpful to see the mouth of the person you are speaking to. When the brain can combine information from different sensory sources, for example during lip-reading, speech comprehension improves.

We investigated this phenomenon in more detail to uncover how visual and auditory brain areas work together during lip-reading. In the experiment, brain activity was measured with functional magnetic resonance imaging while participants heard short sentences and then watched a short silent video of a person speaking. With a button press, they indicated whether the sentence they had heard matched the mouth movements in the video. If the sentence did not match the video, part of the brain network that combines visual and auditory information showed greater activity, and connectivity between the auditory speech region and the STS increased. The strength of this activation depended on participants' lip-reading skill: the stronger the activation, the more accurate the responses. This effect appeared to be specific to speech content; it did not occur when participants had to decide whether the identity of the voice and face matched.

Blank, H., & von Kriegstein, K. (2013). Mechanisms of enhancing visual-speech recognition by prior auditory information. NeuroImage, 65, 109-118. http://dx.doi.org/10.1016/j.neuroimage.2012.09.047