
Learning Planning Operators in Real-World, Partially Observable Environments


Abstract

We are interested in the development of activities in situated, embodied agents such as mobile robots. Central to our theory of development is means-ends analysis planning, and as such, we must rely on operator models that can express the effects of a robot's actions in a dynamic, partially observable environment. This paper presents a two-step process that employs clustering and decision tree induction to perform unsupervised learning of operator models from simple interactions between an agent and its environment. We report our findings with an implementation of this system on a Pioneer-1 mobile robot.
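To make the two-step idea concrete, the following minimal sketch (not the authors' implementation) illustrates one plausible reading of the pipeline: observed state-change vectors are first clustered into candidate effect classes, and a decision tree is then induced over pre-action features to approximate the conditions under which each effect occurs. The feature names, the synthetic sensor data, and the choice of k-means and CART here are all illustrative assumptions.

```python
# Illustrative sketch of clustering + decision tree induction for learning
# operator models; data and feature names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical log of one action ("move forward") executed repeatedly:
# each row holds the pre-action sensor reading and the resulting state change.
n = 200
pre = rng.uniform(0.0, 1.0, size=(n, 3))          # [front_sonar, left_sonar, battery]
blocked = pre[:, 0] < 0.3                          # obstacle close in front
delta = np.where(blocked[:, None],
                 rng.normal([0.0, 0.0, -0.01], 0.02, size=(n, 3)),   # no progress
                 rng.normal([0.2, 0.0, -0.01], 0.02, size=(n, 3)))   # moved forward

# Step 1: cluster the state-change vectors into candidate effect classes.
effects = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(delta)

# Step 2: induce a decision tree mapping pre-action features to effect class,
# approximating the operator's context-dependent outcomes (its "preconditions").
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(pre, effects)
print(export_text(tree, feature_names=["front_sonar", "left_sonar", "battery"]))
```

Under these assumptions, each leaf of the induced tree pairs an observable context with a clustered effect, which is the kind of operator model a means-ends planner could consult.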
