IEEE Geoscience and Remote Sensing Letters

Infrared and Visible Image Fusion Method by Using Hybrid Representation Learning



Abstract

In remote sensing image fusion, infrared and visible images differ greatly in brightness because of their disparate imaging mechanisms; as a result, nontarget regions in the infrared image often disturb the fusion of details from the visible image. This letter proposes a novel infrared and visible image fusion method based on hybrid representation learning, which combines dictionary-learning-based joint sparse representation (JSR) with nonnegative sparse representation (NNSR). In the proposed method, different fusion strategies are adopted for the mean image, which carries the primary energy information, and for the deaveraged image, which contains the important detail features. Because the deaveraged image holds most of the high-frequency detail of the source images, JSR is used to sparsely and accurately extract its common and innovation features, so that the high-frequency details of the deaveraged image are merged accurately. The mean image, in turn, represents the low-frequency, overview features of the source images; using NNSR, it is classified into different feature regions, which are then fused separately. On the one hand, the proposed method eliminates the influence on the fusion result of the large brightness difference caused by the different imaging mechanisms of infrared and visible images; on the other hand, it improves the readability and accuracy of the fused image. Experimental results show that, compared with classical and state-of-the-art fusion methods, the proposed method not only integrates the infrared target accurately but also preserves the rich background details of the visible image, yielding superior fusion quality.
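The overall pipeline described in the abstract — splitting each source image into a low-frequency mean part and a high-frequency deaveraged part, fusing the two parts with different strategies, and recombining them — can be sketched as follows. This is a minimal toy illustration, not the paper's method: the patch size, the max-absolute rule standing in for JSR-based detail fusion, and the simple averaging standing in for NNSR-based region-wise mean fusion are all assumptions made for brevity.

```python
import numpy as np

def fuse_hybrid_sketch(ir, vis, patch=8):
    """Toy stand-in for the hybrid-representation pipeline:
    each image is split into a patch-mean (low-frequency) image and a
    deaveraged (detail) image; details are fused by max-absolute selection
    (a crude proxy for JSR feature extraction) and means by averaging
    (a crude proxy for NNSR region classification and fusion)."""
    h, w = ir.shape
    H, W = h - h % patch, w - w % patch  # crop to a whole patch grid
    ir, vis = ir[:H, :W], vis[:H, :W]

    def split(img):
        # Group pixels into non-overlapping patch x patch blocks.
        blocks = img.reshape(H // patch, patch, W // patch, patch)
        mean = blocks.mean(axis=(1, 3), keepdims=True)
        mean_img = np.broadcast_to(mean, blocks.shape).reshape(H, W)
        return mean_img, img - mean_img  # mean part, deaveraged part

    m_ir, d_ir = split(ir)
    m_vis, d_vis = split(vis)

    # Detail fusion: keep the stronger high-frequency response per pixel.
    fused_detail = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    # Mean fusion: plain average of the low-frequency parts.
    fused_mean = 0.5 * (m_ir + m_vis)
    return fused_mean + fused_detail
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that the decomposition and recombination are lossless.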
