Journal: Cognition: International Journal of Cognitive Psychology

Gaze patterns reveal how situation models and text representations contribute to episodic text memory



Abstract

When recalling something you have previously read, to what degree does such episodic remembering activate a situation model of the described events versus a memory representation of the text itself? The present study was designed to address this question by recording the eye movements of participants who recalled previously read texts while looking at a blank screen. An accumulating body of research has demonstrated that spontaneous eye movements occur during episodic memory retrieval and that the fixation locations in such gaze patterns largely overlap with the visuospatial layout of the recalled information. Here we used this phenomenon to investigate to what degree participants' gaze patterns corresponded with the visuospatial configuration of the text itself versus the visuospatial configuration described in it. The texts to be recalled were scene descriptions, in which the spatial configuration of the scene content was manipulated to be either congruent or incongruent with the spatial configuration of the text itself. Results show that participants' gaze patterns were more likely to correspond with a visuospatial representation of the described scene than with a visuospatial representation of the text itself, but also that the contribution of those spatial representations is sensitive to the text content. This is the first demonstration that eye movements can be used to discriminate at which representational level texts are remembered, and the findings provide novel insight into the underlying dynamics at play.
