BMC Medical Informatics and Decision Making

Multi-modality self-attention aware deep network for 3D biomedical segmentation

Abstract

Deep learning based segmentation models have been gradually applied to biomedical images and have achieved state-of-the-art performance for 3D biomedical segmentation. However, most existing biomedical segmentation research considers application cases that use a single type of medical image from the corresponding examination method. In practical clinical radiology, multiple imaging examinations are normally required for a final diagnosis, especially for severe diseases such as cancers. Therefore, considering the case of employing multi-modal images and exploring effective deep-network-based multi-modality fusion, we study how to make full use of the complementary information in multi-modal images, drawing on the clinical experience of radiologists in image analysis. Guided by the diagnostic experience of human radiologists, we propose a new self-attention aware mechanism that improves segmentation performance by paying different amounts of attention to different modal images and different symptoms. First, we propose a multi-path encoder-decoder deep network for 3D biomedical segmentation. Second, to leverage the complementary information among modalities, we introduce an attention structure called the Multi-Modality Self-Attention Aware (MMSA) convolution. The multi-modal images used in this paper are different modalities of MR scans, which are fed into the network separately. Self-attention weighted fusion of the multi-modal features is then performed by the proposed MMSA, which adaptively adjusts the fusion weights according to the contribution of different modalities and different features, as learned from the labeled data. Experiments were conducted on the public competition dataset BRATS-2015. The results show that the proposed method achieves Dice scores of 0.8726, 0.6563, and 0.8313 for the whole tumor, the tumor core, and the enhancing tumor core, respectively. Compared with the U-Net with SE blocks, the scores are increased by 0.0212, 0.031, and 0.0304. We present a multi-modality self-attention aware convolution that yields better segmentation results through an adaptive weighted fusion mechanism exploiting multiple medical image modalities. Experimental results demonstrate the effectiveness of our method and its promise for multi-modality fusion based medical image analysis.
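To make the adaptive fusion concrete, the following is a minimal PyTorch sketch of one way such modality-wise self-attention weighting could work, assuming an SE-style gate over pooled per-modality descriptors. The module name MMSAFusion, the reduction ratio, and the tensor shapes are illustrative assumptions, not the authors' exact MMSA implementation.

# Illustrative sketch only: an SE-style gating block that adaptively
# weights 3D features from M modalities before fusing them. Names
# (MMSAFusion, reduction) are hypothetical, not from the paper.
import torch
import torch.nn as nn

class MMSAFusion(nn.Module):
    def __init__(self, num_modalities: int, channels: int, reduction: int = 4):
        super().__init__()
        self.num_modalities = num_modalities
        total = num_modalities * channels
        # Gate over the concatenated per-modality descriptors; outputs
        # one weight per (modality, channel) pair.
        self.gate = nn.Sequential(
            nn.Linear(total, total // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(total // reduction, total),
            nn.Sigmoid(),
        )

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats: list of M tensors, each (B, C, D, H, W), one per modality path.
        b, c = feats[0].shape[:2]
        stacked = torch.stack(feats, dim=1)              # (B, M, C, D, H, W)
        # Global average pool each modality's features to a (B, M*C) descriptor.
        desc = stacked.mean(dim=(3, 4, 5)).reshape(b, -1)
        weights = self.gate(desc).reshape(b, self.num_modalities, c, 1, 1, 1)
        # Weighted sum over modalities gives the fused map (B, C, D, H, W).
        return (stacked * weights).sum(dim=1)

# Usage: fuse encoder features from 4 MR modalities
# (e.g. T1, T1c, T2, FLAIR in BRATS-2015), each with its own path.
fusion = MMSAFusion(num_modalities=4, channels=32)
feats = [torch.randn(2, 32, 8, 16, 16) for _ in range(4)]
fused = fusion(feats)  # shape (2, 32, 8, 16, 16)

In this sketch each modality keeps a separate encoder path, and the learned gate plays the role of the adaptive fusion weights described above: modalities and channels that contribute more to revealing a symptom receive higher weights at fusion time.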
