IEEE Aerospace Conference

Pose estimation for non-cooperative spacecraft rendezvous using convolutional neural networks



Abstract

On-board estimation of the pose of an uncooperative target spacecraft is an essential task for future on-orbit servicing and close-proximity formation flying missions. However, two issues hinder reliable on-board monocular vision based pose estimation: robustness to illumination conditions, owing to a lack of reliable visual features, and the scarcity of image datasets required for training and benchmarking. To address these two issues, this work details the design and validation of a monocular vision based pose determination architecture for spaceborne applications. The primary contribution of this work to the state of the art is a novel pose determination method based on Convolutional Neural Networks (CNNs) that provides an initial guess of the pose in real time on board. The method involves discretizing the pose space and training the CNN with images corresponding to the resulting pose labels. Since reliable training of the CNN requires massive image datasets and computational resources, the parameters of the CNN must be determined prior to the mission using synthetic imagery. Moreover, reliable training of the CNN requires datasets that appropriately account for the noise, color, and illumination characteristics expected in orbit. Therefore, the secondary contribution of this work is an image synthesis pipeline tailored to generate high-fidelity images of any spacecraft 3D model. In contrast to prior techniques demonstrated for close-range pose determination of spacecraft, the proposed architecture relies on neither hand-engineered image features nor a priori relative state information. Hence, the proposed technique is scalable to spacecraft of different structural and physical properties and robust to the dynamic illumination conditions of space. Metrics measuring classification and pose accuracy show that the presented architecture has the desired robustness and scalability properties. The proposed technique can therefore be used to augment the state-of-the-art monocular vision-based pose estimation techniques used in spaceborne applications.
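
The classification idea described in the abstract (discretizing the pose space into labels and training a CNN on images rendered for each label) can be illustrated with a minimal sketch. The bin counts, network layers, and names below are illustrative assumptions, not the authors' implementation; any small image classifier over the pose bins would follow the same pattern.

```python
# Minimal PyTorch sketch of pose-space discretization + CNN classification.
# All parameters (bin counts, layer sizes) are assumed for illustration.
import itertools
import torch
import torch.nn as nn

def discretize_pose_space(n_yaw=8, n_pitch=4, n_roll=8):
    """Quantize attitude into coarse (yaw, pitch, roll) bins; each bin
    center becomes one classification label for the CNN."""
    yaws = [i * 360.0 / n_yaw for i in range(n_yaw)]
    pitches = [-90.0 + (i + 0.5) * 180.0 / n_pitch for i in range(n_pitch)]
    rolls = [i * 360.0 / n_roll for i in range(n_roll)]
    return list(itertools.product(yaws, pitches, rolls))

POSE_LABELS = discretize_pose_space()  # 8 * 4 * 8 = 256 classes

class PoseInitCNN(nn.Module):
    """Small CNN mapping a grayscale image to a pose-bin label."""
    def __init__(self, n_classes=len(POSE_LABELS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# The predicted bin's center serves as the initial pose guess that a
# downstream refinement step would polish.
model = PoseInitCNN()
logits = model(torch.randn(1, 1, 128, 128))  # stand-in for a real image
initial_guess = POSE_LABELS[logits.argmax(dim=1).item()]
print("initial attitude guess (yaw, pitch, roll):", initial_guess)
```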
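Likewise, the abstract's requirement that training imagery account for on-orbit noise and illumination can be sketched as a toy augmentation step applied to rendered images; the gain range and noise level below are assumed values, not the paper's actual synthesis pipeline.

```python
# Toy noise/illumination augmentation for rendered spacecraft images.
# Gain range and noise standard deviation are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def augment(render):
    """render: float image in [0, 1]; returns a re-lit, noisier copy."""
    gain = rng.uniform(0.5, 1.5)                   # global illumination change
    img = np.clip(render * gain, 0.0, 1.0)
    img = img + rng.normal(0.0, 0.02, img.shape)   # additive sensor noise
    return np.clip(img, 0.0, 1.0)

synthetic = augment(np.full((128, 128), 0.4))      # stand-in for a render
print("mean intensity after augmentation:", synthetic.mean())
```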
