Pattern Recognition: The Journal of the Pattern Recognition Society

Multimodality image registration by maximization of quantitative-qualitative measure of mutual information

Abstract

This paper presents a novel image similarity measure, referred to as quantitative-qualitative measure of mutual information (Q-MI), for multimodality image registration. Conventional information measures, e.g., Shannon's entropy and mutual information (MI), reflect quantitative aspects of information because they only consider probabilities of events. In fact, each event has its own utility to the fulfillment of the underlying goal, which can be independent of its probability of occurrence. Thus, it is important to consider both quantitative (i.e., probability) and qualitative (i.e., utility) measures of information in order to fully capture the characteristics of events. Accordingly, in multimodality image registration, Q-MI should be used to integrate the information obtained from both the image intensity distributions and the utilities of voxels in the images. Different voxels can have different utilities; for example, in brain images two voxels can have the same intensity value but different utilities, e.g., a white matter (WM) voxel near the cortex can have higher utility than a WM voxel inside a large uniform WM region. In Q-MI, the utility of each voxel in an image can be determined according to the regional saliency value calculated from the scale-space map of this image. Since the voxels with higher utility values (or saliency values) contribute more in measuring Q-MI of the two images, the Q-MI-based registration method is much more robust than conventional MI-based registration methods. Also, the Q-MI-based registration method can provide a smoother registration function with a relatively larger capture range. In this paper, the proposed Q-MI has been validated and applied to the rigid registrations of clinical brain images, such as MR, CT and PET images. (C) 2007 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
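The abstract describes Q-MI conceptually but does not state its formula here. As a hedged illustration only, and not necessarily the authors' exact definition, a utility-weighted mutual information in this spirit can be written as Q-MI(A, B) = sum over (a, b) of u(a, b) p(a, b) log[ p(a, b) / (p(a) p(b)) ], where u(a, b) is a utility weight derived from voxel saliency. The Python sketch below computes such a measure for two aligned images and a per-voxel saliency map; the bin count, the normalization, and the way utility enters the joint histogram are assumptions made for illustration, not details taken from the paper.

# Illustrative sketch only: a saliency-weighted mutual information in the
# spirit of Q-MI. The weighting scheme and binning are assumptions and are
# not taken from the paper's actual formulation.
import numpy as np

def weighted_mutual_information(img_a, img_b, utility, bins=64):
    """Utility-weighted MI between two spatially aligned images.

    img_a, img_b : float arrays of identical shape (overlapping voxels).
    utility      : non-negative per-voxel weight (e.g., regional saliency).
    """
    # Assign each voxel to an intensity bin in each image.
    a = np.digitize(img_a.ravel(), np.linspace(img_a.min(), img_a.max(), bins))
    b = np.digitize(img_b.ravel(), np.linspace(img_b.min(), img_b.max(), bins))
    w = utility.ravel()

    # Utility-weighted joint histogram: salient voxels contribute more.
    joint = np.zeros((bins + 2, bins + 2))
    np.add.at(joint, (a, b), w)

    p_ab = joint / joint.sum()                 # weighted joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)      # weighted marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)      # weighted marginal of image B

    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

In a rigid registration loop, a score like this would be maximized over the transformation parameters in place of standard MI, so that salient voxels (e.g., WM voxels near the cortex) influence the alignment more than voxels inside large uniform regions, which is the intuition the abstract describes.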

