IEEE Transactions on Medical Imaging

Multi-View Spatial Aggregation Framework for Joint Localization and Segmentation of Organs at Risk in Head and Neck CT Images



Abstract

Accurate segmentation of organs at risk (OARs) from head and neck (H&N) CT images is crucial for effective H&N cancer radiotherapy. However, existing deep learning methods are often not trained in an end-to-end fashion, i.e., they independently predetermine the regions of the target organs before organ segmentation, which limits information sharing between the related tasks and thus leads to suboptimal segmentation results. Furthermore, when a conventional segmentation network is used to segment all the OARs simultaneously, the results often favor large OARs over small ones. Thus, existing methods often train a specific model for each OAR, ignoring the correlation between the different segmentation tasks. To address these issues, we propose a new multi-view spatial aggregation framework for joint localization and segmentation of multiple OARs in H&N CT images. The core of our framework is a region-of-interest (ROI)-based fine-grained representation convolutional neural network (CNN), which generates multi-OAR probability maps from each 2D view (i.e., the axial, coronal, and sagittal views) of the CT images. Specifically, our ROI-based fine-grained representation CNN (1) unifies the OAR localization and segmentation tasks and trains them in an end-to-end fashion, and (2) improves the segmentation results for OARs of various sizes via a novel ROI-based fine-grained representation. Our multi-view spatial aggregation framework then spatially aggregates the generated multi-view, multi-OAR probability maps to segment all the OARs simultaneously. We evaluate our framework on two sets of H&N CT images and achieve competitive and highly robust segmentation performance for OARs of various sizes.
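To make the multi-view idea concrete, below is a minimal PyTorch sketch (not the authors' released code) of the fusion step alone: a 2D multi-organ segmentation network is run over the axial, coronal, and sagittal slices of a CT volume, each view's per-slice softmax outputs are stacked back into a 3D probability map, and the three views are fused. The per-view networks (axial_net, coronal_net, sagittal_net), the class count NUM_OARS, and the single-channel slice input are all assumptions for illustration; voxel-wise averaging stands in for the paper's learned spatial aggregation module, and the ROI-based localization branch is omitted entirely.

    import torch

    NUM_OARS = 9  # hypothetical number of OAR classes (plus background)

    def segment_view(volume: torch.Tensor, net: torch.nn.Module, axis: int) -> torch.Tensor:
        """Slice `volume` (D, H, W) along `axis`, run the 2D net on each slice,
        and stack the per-slice class probabilities back into a 3D map of
        shape (C, D, H, W)."""
        probs = []
        with torch.no_grad():
            for sl in volume.unbind(dim=axis):
                logits = net(sl.unsqueeze(0).unsqueeze(0))     # (1, C, h, w)
                probs.append(torch.softmax(logits, dim=1)[0])  # (C, h, w)
        # Re-insert the sliced dimension after the class dimension.
        return torch.stack(probs, dim=axis + 1)                # (C, D, H, W)

    def aggregate_views(volume, axial_net, coronal_net, sagittal_net):
        """Fuse axial/coronal/sagittal predictions; argmax gives final labels."""
        views = [
            segment_view(volume, axial_net, axis=0),
            segment_view(volume, coronal_net, axis=1),
            segment_view(volume, sagittal_net, axis=2),
        ]
        # Voxel-wise average is a stand-in for the learned aggregation module.
        fused = torch.stack(views).mean(dim=0)                 # (C, D, H, W)
        return fused.argmax(dim=0)                             # (D, H, W) labels

Because each view sees the volume at full in-plane resolution along a different axis, fusing the three probability maps recovers 3D context that any single 2D view lacks, which is the motivation for the multi-view design.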
