Frontiers in Psychology

Brain responses and looking behavior during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life


Abstract

The use of visual cues during the processing of audiovisual (AV) speech is known to be less efficient in children and adults with language difficulties, and such difficulties are more prevalent among children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6–9 months to 14–16 months of age. We used eye-tracking to examine whether individual differences in visual attention during AV speech processing in 6–9-month-old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6–9-month-old infants also participated in an event-related potential (ERP) AV task within the same experimental session. Language development was then followed up at 14–16 months of age using two measures: the Preschool Language Scale and the Oxford Communicative Development Inventory. The results show that infants who were less efficient at auditory speech processing at 6–9 months had lower receptive language scores at 14–16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audiovisually incongruent stimuli at 6–9 months were both significantly associated with language development at 14–16 months. These findings add to our understanding of individual differences in the neural signatures of AV processing and associated looking behavior in infants.