Journal: PLoS Biology
Representational interactions during audiovisual speech entrainment: Redundancy in left posterior superior temporal gyrus and synergy in left motor cortex
Integration of multimodal sensory information is fundamental to many aspects of human behavior, but the neural mechanisms underlying these processes remain poorly understood. For example, during face-to-face communication, we know that the brain integrates dynamic auditory and visual inputs, but we do not yet understand where and how such integration mechanisms support speech comprehension. Here, we quantify representational interactions between dynamic audio and visual speech signals and show that different brain regions exhibit different types of representational interaction. With a novel information theoretic measure, we found that theta (3–7 Hz) oscillations in the posterior superior temporal gyrus/sulcus (pSTG/S) represent auditory and visual inputs redundantly (i.e., represent common features of the two), whereas the same oscillations in left motor and inferior temporal cortex represent the inputs synergistically (i.e., the instantaneous relationship between audio and visual inputs is also represented). Importantly, redundant coding in the left pSTG/S and synergistic coding in the left motor cortex predict behavior—i.e., speech comprehension performance. Our findings therefore demonstrate that processes classically described as integration can have different statistical properties and may reflect distinct mechanisms that occur in different brain regions to support audiovisual speech comprehension.

Author summary

Combining different sources of information is fundamental to many aspects of behavior, from our ability to pick up a ringing mobile phone to communicating with a friend in a busy environment. Here, we have studied the integration of auditory and visual speech information. Our work demonstrates that integration relies upon two different representational interactions. One system conveys redundant information by representing information that is common to both auditory and visual modalities.
The other system, which is supported by a different brain area, represents synergistic information by conveying more information than the linear summation of the individual auditory and visual information. Further, we show that these mechanisms are related to behavioral performance. This novel insight opens new ways to enhance our understanding of the mechanisms underlying multimodal information integration, a fundamental aspect of brain function. These fresh insights have been achieved by applying to brain imaging data a recently developed methodology called the partial information decomposition. This methodology also provides a novel and principled way to quantify the interactions between representations of multiple stimulus features in the brain.
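To make the redundancy/synergy terminology concrete, the following is a minimal sketch of a partial information decomposition for discrete variables, using the original Williams–Beer redundancy measure (I_min). Note this is an illustration of the general framework only, not the specific measure applied to the continuous MEG and speech signals in the study; all function names here are our own.

```python
import numpy as np

def mutual_info(p_xy):
    """I(X;Y) in bits from a joint probability table p_xy[x, y]."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (px * py)[nz])).sum())

def pid_williams_beer(p_abs):
    """Decompose I(S; A, B) for a discrete joint table p_abs[a, b, s] into
    (redundancy, unique_A, unique_B, synergy), all in bits, using the
    Williams-Beer redundancy I_min."""
    p_s = p_abs.sum(axis=(0, 1))   # p(s)
    p_as = p_abs.sum(axis=1)       # p(a, s)
    p_bs = p_abs.sum(axis=0)       # p(b, s)

    def spec_info(p_xs, s):
        # Specific information I(S=s; X) = sum_x p(x|s) log2[p(s|x) / p(s)]
        p_x = p_xs.sum(axis=1)
        total = 0.0
        for x in range(p_xs.shape[0]):
            if p_xs[x, s] > 0:
                p_x_given_s = p_xs[x, s] / p_s[s]
                p_s_given_x = p_xs[x, s] / p_x[x]
                total += p_x_given_s * np.log2(p_s_given_x / p_s[s])
        return total

    # Redundancy: expected minimum specific information over the two sources.
    red = sum(p_s[s] * min(spec_info(p_as, s), spec_info(p_bs, s))
              for s in range(len(p_s)) if p_s[s] > 0)

    i_a = mutual_info(p_as)
    i_b = mutual_info(p_bs)
    i_ab = mutual_info(p_abs.reshape(-1, p_abs.shape[2]))  # joint (A,B) vs S
    # I(S;A,B) = red + unique_A + unique_B + synergy
    return red, i_a - red, i_b - red, i_ab - i_a - i_b + red
```

For example, if S = A XOR B with A and B independent fair coins, neither source alone carries any information about S, yet together they determine it fully: the decomposition yields zero redundancy and unique terms and one full bit of synergy. This is the discrete analogue of a region representing the instantaneous relationship between the audio and visual inputs rather than their common features.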