
Dynamic facial expressions in American Sign Language: Behavioral, neuroimaging, and facial-coding analyses for deaf and hearing subjects.



Abstract

The aim of this study is to determine how Deaf signers (DS) and hearing non-signers (HNS) categorize emotional, American Sign Language (ASL) grammatical, and non-emotional non-grammatical (NENG) facial expressions, based on video clips showing only the face. The clips include six manifestations of 20 sentences (neutral, angry, surprise, quizzical, yes/no question, and wh-question [questions using who, what, etc.]) and were chosen because angry, quizzical, and wh-question expressions look similar, as do yes/no question and surprise expressions.

Results of the initial behavioral study (Chapter Two) show that HNS accurately categorize emotional and NENG expressions, but not ASL question faces. Subjects do not confuse ASL expressions with superficially similar emotional or NENG expressions.

Chapter Three presents results of DS and HNS for a redesigned stimulus set. Both groups accurately categorize emotional and neutral expressions but frequently mislabel quizzical expressions "neutral." Subjects categorize most ASL question faces correctly but make errors according to feature similarities, calling wh-questions "quizzical" and "angry," and categorizing yes/no questions as "surprise." DS are more confident than HNS across all expression types. Differences in HNS performance between the two stimulus sets likely result from their ability to match the more subtle second set of ASL question expressions to familiar templates of question faces in English.

Chapter Four presents a new methodology for coding facial expressions. Data for the movement of every facial feature per expression type, averaged over 20 samples each, show that emotional expressions have faster onsets of movement in features shared with grammatical or NENG expressions. Certain head movements occur only in specific expression types.
These data show that dynamic differences distinguish expression types, despite static feature similarities.

Chapter Five presents an fMRI study showing activation in the right superior temporal gyrus and right inferior frontal lobule, supporting previously untested hypotheses that these regions process facial expressions in ASL. Both groups show language activation in the left superior temporal gyrus (STG). DS have more activation in STG than HNS, because of increased ASL language competence. Language activation in HNS may be related to task instructions indicating that the stimuli represent sentences. No differential activation patterns for different stimulus types were detected.

Bibliographic record

  • Author

    Grossman, Ruth Bergida.

  • Affiliation

    Boston University.

  • Degree grantor: Boston University.
  • Subjects: Language, Linguistics; Psychology, Cognitive; Biology, Neuroscience.
  • Degree: Ph.D.
  • Year: 2001
  • Pages: 282 p.
  • Total pages: 282
  • Format: PDF
  • Language: English
  • Classification: Linguistics; Psychology; Neuroscience
  • Keywords
