Journal: Artificial Intelligence

Transferring skills to humanoid robots by extracting semantic representations from observations of human activities



Abstract

In this study, we present a framework that infers human activities from observations using semantic representations. The proposed framework can be utilized to address the difficult and challenging problem of transferring tasks and skills to humanoid robots. We propose a method that allows robots to obtain and determine a higher-level understanding of a demonstrator's behavior via semantic representations. This abstraction from observations captures the "essence" of the activity, thereby indicating which aspects of the demonstrator's actions should be executed in order to accomplish the required activity. Thus, a meaningful semantic description is obtained in terms of human motions and object properties. In addition, we validated the semantic rules obtained under different conditions, i.e., three different and complex kitchen activities: 1) making a pancake; 2) making a sandwich; and 3) setting the table. We present quantitative and qualitative results, which demonstrate that without any further training, our system can deal with time restrictions, different execution styles of the same task by several participants, and different labeling strategies. This means that the rules obtained from one scenario remain valid even in new situations, which demonstrates that the inferred representations do not depend on the task performed. The results show that our system correctly recognized human behaviors in real time in around 87.44% of cases, which was even better than a random participant recognizing the behaviors of another human (about 76.68%). In particular, the semantic rules acquired can be used to effectively improve the dynamic growth of the ontology-based knowledge representation. Hence, this method can be used flexibly across different demonstrations and constraints to infer and achieve a goal similar to the one observed.
Furthermore, the inference capability introduced in this study was integrated into a joint space control loop for a humanoid robot, an iCub, to achieve goals similar to those of the human demonstrator online.
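As a rough illustration of the idea the abstract describes, perceived low-level states (hand motion, object properties) can be mapped to high-level activity labels by human-readable semantic rules. The following sketch is hypothetical: the state names, rules, and labels are illustrative placeholders, not the rules learned in the paper.

```python
# Hypothetical sketch: semantic-rule inference of an activity label from
# observed hand/object states. Rules and names are illustrative only.

def infer_activity(hand_moving: bool, object_in_hand: bool) -> str:
    """Map perceived hand/object states to a semantic activity label."""
    if not hand_moving and not object_in_hand:
        return "idle"    # no motion, no object involved
    if hand_moving and not object_in_hand:
        return "reach"   # hand moves toward an object
    if hand_moving and object_in_hand:
        return "take"    # object is being transported
    return "hold"        # stationary hand with object in hand

# Example: a moving hand holding an object is labeled "take"
print(infer_activity(True, True))
```

Because the rules reference abstract states rather than trajectories, the same label can be produced for different execution styles of a task, which is the property the abstract's cross-scenario validation relies on.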
