IEEE Transactions on Neural Networks and Learning Systems

Deep Reinforcement Learning-Based Automatic Exploration for Navigation in Unknown Environment



Abstract

This paper investigates the automatic exploration problem in unknown environments, which is key to applying robotic systems to social tasks. Solutions built by stacking hand-crafted decision rules cannot cover the variety of environments and sensor properties; learning-based control methods adapt to these scenarios, but they suffer from low learning efficiency and poor transferability from simulation to reality. In this paper, we construct a general exploration framework by decomposing the exploration process into decision, planning, and mapping modules, which increases the modularity of the robotic system. Based on this framework, we propose a deep reinforcement learning-based decision algorithm that uses a deep neural network to learn an exploration strategy from the partial map. The results show that the proposed algorithm achieves better learning efficiency and adaptability to unknown environments. In addition, we conduct experiments on a physical robot, and the results suggest that the learned policy transfers well from simulation to the real robot.
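To make the decomposition concrete, the sketch below shows one plausible shape of the decision-planning-mapping loop the abstract describes, with a small convolutional network scoring candidate exploration targets from the partial map. This is an illustration under stated assumptions, not the paper's actual method: `DecisionNet`, `exploration_step`, and the `planner`/`mapper`/`robot` interfaces are hypothetical names introduced here.

```python
import torch
import torch.nn as nn


class DecisionNet(nn.Module):
    """Hypothetical convolutional policy that scores candidate
    exploration targets directly from the partial occupancy map."""

    def __init__(self, map_size: int = 64, num_targets: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * (map_size // 4) ** 2, num_targets)

    def forward(self, partial_map: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(partial_map))


def exploration_step(partial_map, policy, planner, mapper, robot):
    """One cycle of the decision -> planning -> mapping loop.
    `planner`, `mapper`, and `robot` stand in for the framework's other
    modules; their call signatures here are assumptions for illustration."""
    obs = torch.as_tensor(partial_map, dtype=torch.float32)[None, None]
    with torch.no_grad():
        scores = policy(obs)              # decision: score each target
    goal = int(scores.argmax(dim=1))      # greedy choice of next target
    path = planner(partial_map, goal)     # planning module, e.g. A*
    scan = robot.execute(path)            # drive the path, gather sensing
    return mapper(partial_map, scan)      # mapping: fuse scan into the map
```

Because the decision module only consumes the partial map and emits a target, the planner and mapper can be swapped independently, which is the modularity benefit the framework claims.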


