Conference on Medical Imaging 2008: Image Processing; 2008-02-17 to 2008-02-19; San Diego, CA (US)

A novel framework for multi-modal intensity-based similarity measures based on internal similarity



Abstract

We present a novel framework for describing intensity-based multi-modal similarity measures. Our framework is based on a concept of internal, or self, similarity. First, the locations of multiple regions or patches that are "similar" to each other are identified within a single image. The term "similar" is used here to represent a generic intra-modal similarity measure. Then, if we examine a second image at the same locations, and this image is registered to the first, we should find that the patches at these locations are also "similar", even though the actual features in the patches, when compared between the two images, could be very different. We propose that a measure based on this principle can be used as an inter-modal similarity measure because, as the two images become increasingly misregistered, the patches within the second image become increasingly dissimilar. Our framework therefore yields an inter-modal similarity measure by using two intra-modal similarity measures, applied separately within each image. In this paper we describe how popular multi-modal similarity measures, such as mutual information, can be described within this framework. In addition, the framework has the potential to allow the formation of novel similarity measures that register using regional information rather than individual pixel/voxel intensities. An example similarity measure is produced and its ability to guide a registration algorithm is investigated. Registration experiments are carried out on three datasets. The pairs of images to be registered were specifically chosen because they were expected to challenge (i.e. result in misregistrations for) standard intensity-based measures, such as mutual information. The images include synthetic data, cadaver data and clinical data, and cover a range of modalities. Our experiments show that our proposed measure is able to achieve accurate registrations where standard intensity-based measures, such as mutual information, fail.
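As a rough illustration of the principle the abstract describes, the following Python sketch builds an inter-modal score from two intra-modal comparisons: patch pairs that are most similar within the first image are re-evaluated at the same locations within the second image. Normalized cross-correlation stands in here for the generic intra-modal "similar" measure, and the function names, patch-sampling scheme, and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches
    (a stand-in for the generic intra-modal similarity measure)."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def extract_patch(img, center, size):
    """Square patch of side `size` centred at (row, col); centers are
    assumed to lie at least size // 2 pixels away from the image border."""
    r, c = center
    h = size // 2
    return img[r - h:r + h + 1, c - h:c + h + 1]

def internal_similarity_measure(img_a, img_b, centers, size=9, top_k=20):
    """Inter-modal similarity built from two intra-modal measures.

    1. Within img_a, rank pairs of patch locations by how "similar" the
       patches are to each other (intra-modal similarity in image A).
    2. For the most similar pairs, measure how similar the patches at the
       same locations are within img_b (intra-modal similarity in image B).
    3. If img_b is well registered to img_a, those patches should also be
       similar, so the mean score is high; misregistration lowers it.
    """
    patches_a = [extract_patch(img_a, c, size) for c in centers]
    patches_b = [extract_patch(img_b, c, size) for c in centers]

    # Step 1: rank location pairs by their similarity within image A.
    pairs = []
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            pairs.append((ncc(patches_a[i], patches_a[j]), i, j))
    pairs.sort(reverse=True)

    # Steps 2-3: evaluate the same location pairs within image B and average.
    scores = [ncc(patches_b[i], patches_b[j]) for _, i, j in pairs[:top_k]]
    return float(np.mean(scores)) if scores else 0.0
```

A registration algorithm could maximize this score over transform parameters applied to img_b, re-sampling img_b at each iteration before calling the measure; the patch size, number of sampled locations, and top_k are free parameters of this sketch.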
