Journal: Medical Image Analysis
Instrument detection and pose estimation with rigid part mixtures model in video-assisted surgeries


Abstract

Localizing instrument parts in video-assisted surgeries is an attractive and open computer vision problem. A working algorithm would immediately find applications in computer-aided interventions in the operating theater. Knowing the location of tool parts could help augment the visual faculty of surgeons, assess the skills of novice surgeons, and increase the autonomy of surgical robots. A surgical tool varies in appearance due to articulation, viewpoint changes, and noise. We introduce a new method for the detection and pose estimation of multiple non-rigid and robotic tools in surgical videos. The method uses a rigidly structured, bipartite model of end-effector and shaft parts that consistently encodes diverse, pose-specific appearance mixtures of the tool. This rigid part mixtures model then jointly explains the evolving tool structure by switching between mixture components. Rigidly capturing end-effector appearance allows explicit transfer of keypoint meta-data from the detected components for full 2D pose estimation. The detector can likewise delineate a precise skeleton of the end-effector by transferring additional keypoints. To this end, we propose an effective procedure for learning such rigid mixtures from videos and for pooling the modeled shaft part, which undergoes frequent truncation at the border of the imaged scene. Notably, extensive diagnostic experiments show that feature regularization is key to fine-tuning the model in the presence of the inherent appearance bias in videos. Experiments further show that estimation of end-effector pose improves when the shaft part is included in the model. We then evaluate our approach on publicly available datasets of in-vivo sequences of non-rigid tools and demonstrate state-of-the-art results. (C) 2018 Elsevier B.V. All rights reserved.
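The core mechanism the abstract describes — a detector that switches between pose-specific mixture components, each a rigid end-effector/shaft pair, and then transfers the winning component's stored keypoint offsets to obtain a full 2D pose — can be sketched as follows. This is a toy illustration under assumed names and shapes (`MixtureComponent`, dot-product appearance scores, a dense feature grid), not the paper's actual implementation or features.

```python
import numpy as np

class MixtureComponent:
    """One pose-specific appearance mixture: a rigid end-effector/shaft pair
    plus keypoint offsets learned from training poses (all shapes illustrative)."""
    def __init__(self, ee_filter, shaft_filter, shaft_offset, keypoint_offsets):
        self.ee_filter = np.asarray(ee_filter)          # (D,) end-effector filter
        self.shaft_filter = np.asarray(shaft_filter)    # (D,) shaft filter
        self.shaft_offset = np.asarray(shaft_offset)    # rigid (dy, dx) of shaft
        self.keypoint_offsets = np.asarray(keypoint_offsets)  # (K, 2) offsets

def component_score(feat, comp, loc):
    """Appearance score of one component at `loc`; the rigid geometry means
    there is no deformation cost, just two filter responses."""
    y, x = loc
    sy, sx = np.asarray(loc) + comp.shaft_offset
    return float(feat[y, x] @ comp.ee_filter + feat[sy, sx] @ comp.shaft_filter)

def detect(feat, components, locations):
    """Scan locations, 'switch' between mixture components, keep the best,
    and transfer the winning component's keypoint offsets for the 2D pose."""
    best = max(
        (component_score(feat, c, l), ci, l)
        for ci, c in enumerate(components)
        for l in locations
    )
    _, ci, loc = best
    keypoints = np.asarray(loc) + components[ci].keypoint_offsets
    return ci, loc, keypoints

# Toy demo: two mixtures (e.g. open vs. closed jaws) on an 8x8 feature grid.
D = 4
feat = np.zeros((8, 8, D))
open_jaw = MixtureComponent(np.eye(D)[0], np.eye(D)[1], (0, -2), [(0, 0), (1, 1)])
closed_jaw = MixtureComponent(np.eye(D)[2], np.eye(D)[1], (0, -2), [(0, 0), (0, 2)])
# Paint "open-jaw" evidence: end-effector at (3, 5), shaft two cells left.
feat[3, 5, 0] = 1.0
feat[3, 3, 1] = 1.0
locs = [(y, x) for y in range(8) for x in range(2, 8)]
ci, loc, kps = detect(feat, [open_jaw, closed_jaw], locs)
```

In this sketch the detector correctly switches to the open-jaw component at (3, 5) and transfers its two keypoint offsets, yielding keypoints at (3, 5) and (4, 6); shaft pooling and feature regularization, which the abstract highlights as important in practice, are omitted for brevity.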
