First Workshop on Language Grounding for Robotics, 2017

Sympathy Begins with a Smile, Intelligence Begins with a Word: Use of Multimodal Features in Spoken Human-Robot Interaction



Abstract

Recognition of social signals, from human facial expressions or prosody of speech, is a popular research topic in human-robot interaction studies. There is also a long line of research in the spoken dialogue community that investigates user satisfaction in relation to dialogue characteristics. However, very little research relates a combination of multimodal social signals and language features detected during spoken face-to-face human-robot interaction to the resulting user perception of a robot. In this paper we show how different emotional facial expressions of human users, in combination with prosodic characteristics of human speech and features of human-robot dialogue, correlate with users' impressions of the robot after a conversation. We find that happiness in the user's recognised facial expression strongly correlates with likeability of a robot, while dialogue-related features (such as number of human turns or number of sentences per robot utterance) correlate with perceiving a robot as intelligent. In addition, we show that facial expression, emotional features, and prosody are better predictors of human ratings related to perceived robot likeability and anthropomorphism, while linguistic and non-linguistic features more often predict perceived robot intelligence and interpretability. As such, these characteristics may in future be used as an online reward signal for in-situ Reinforcement Learning-based adaptive human-robot dialogue systems.
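The correlation analysis the abstract describes, relating a per-dialogue feature (such as the mean "happiness" probability from a facial expression recogniser) to a post-interaction rating (such as likeability), can be sketched as a simple Pearson correlation. This is a minimal illustration only; the feature names and all data values below are fabricated, not taken from the paper.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated example: one entry per participant.
# happiness  = mean recognised-happiness score over a dialogue (0..1)
# likeability = post-interaction questionnaire rating (1..5)
happiness = [0.12, 0.40, 0.35, 0.70, 0.55, 0.20, 0.80, 0.65]
likeability = [2.0, 3.5, 3.0, 4.5, 4.0, 2.5, 5.0, 4.0]

r = pearson(happiness, likeability)
print(f"r = {r:.3f}")  # a strong positive correlation on this toy data
```

In practice one would compute such a coefficient for each (feature, rating) pair across participants, as the paper does when comparing which feature groups best predict likeability versus intelligence ratings.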

