Conference on Medical Imaging: Image Processing

Feature-based Retinal Image Registration for Longitudinal Analysis of Patients with Age-related Macular Degeneration



Abstract

Purpose: Spatial alignment of longitudinally acquired retinal images is necessary for the development of image-based metrics identifying structural features associated with disease progression in diseases such as age-related macular degeneration (AMD). This work develops and evaluates a feature-based registration framework for accurate and robust registration of retinal images. Methods: Two feature-based registration approaches were investigated for the alignment of fundus autofluorescence images. The first method used conventional SIFT local feature descriptors to solve for the geometric transformation between two corresponding point sets. The second method used a deep-learning approach with a network architecture mirroring the feature localization and matching process of the conventional method. The methods were validated using clinical images acquired in an ongoing longitudinal study of AMD, comprising 75 patients (145 eyes) with 4-year follow-up imaging. In the deep-learning method, 113 image pairs were used for training (with ground truth provided by manually verified SIFT feature registration) and 20 image pairs were used for testing (with ground truth provided by manual landmark annotation). Results: The conventional method using SIFT features demonstrated a target registration error (mean ± std) of 0.05 ± 0.04 mm, substantially improving on the initial alignment error of 0.34 ± 0.22 mm. The deep-learning method exhibited an error of 0.10 ± 0.07 mm. While both methods improved upon the initial misalignment, the SIFT method showed the best overall geometric accuracy. However, the deep-learning method remained robust (error = 0.15 ± 0.09 mm) in the 7% of cases in which the SIFT method failed (error = 3.71 ± 6.36 mm). Conclusion: While both methods performed well, the SIFT method exhibited the best overall geometric accuracy, whereas the deep-learning method was superior in terms of robustness. Achieving accurate and robust registration is essential in large-scale studies investigating factors underlying retinal disease progression, such as in AMD.
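
To make the conventional pipeline concrete, the following minimal sketch detects SIFT keypoints in a pair of fundus images, matches their descriptors, and solves for a geometric transform between the corresponding point sets. It assumes OpenCV (cv2) and NumPy; the partial-affine model, the ratio-test threshold, and the function name register_sift are illustrative choices rather than the authors' exact implementation.

import cv2
import numpy as np

def register_sift(fixed, moving, ratio=0.75):
    """Align a moving fundus image to a fixed one (both grayscale arrays).
    Illustrative sketch only; parameters are assumptions, not the paper's."""
    # Detect SIFT keypoints and compute local descriptors in both images.
    sift = cv2.SIFT_create()
    kp_f, desc_f = sift.detectAndCompute(fixed, None)
    kp_m, desc_m = sift.detectAndCompute(moving, None)

    # Match descriptors and keep pairs passing Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_m, desc_f, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    # Solve for the geometric transform between the corresponding point sets,
    # using RANSAC to reject outlier matches.
    src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    transform, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

    # Resample the moving image into the fixed image's coordinate frame.
    aligned = cv2.warpAffine(moving, transform, (fixed.shape[1], fixed.shape[0]))
    return transform, aligned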
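
The reported accuracy figures are target registration errors measured against corresponding landmarks. A minimal sketch of such an evaluation is given below; the 2x3 affine convention and the millimetres-per-pixel scale factor are assumptions made for illustration, not values from the study.

import numpy as np

def target_registration_error(transform, moving_pts, fixed_pts, mm_per_pixel):
    """Mean and std of the Euclidean distance (in mm) between transformed
    moving-image landmarks and their fixed-image counterparts."""
    pts = np.asarray(moving_pts, dtype=float)              # (N, 2) pixel coordinates
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])   # (N, 3) homogeneous coords
    mapped = homog @ np.asarray(transform, dtype=float).T  # apply the 2x3 affine
    err_mm = np.linalg.norm(mapped - np.asarray(fixed_pts), axis=1) * mm_per_pixel
    return err_mm.mean(), err_mm.std()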

