IEEE Geoscience and Remote Sensing Letters

S3: A Spectral-Spatial Structure Loss for Pan-Sharpening Networks


Abstract

Recently, many deep-learning-based pan-sharpening methods have been proposed for generating high-quality pan-sharpened (PS) satellite images. These methods have focused on various types of convolutional neural network (CNN) structures, which were trained by simply minimizing a spectral loss between the network outputs and the corresponding high-resolution (HR) multi-spectral (MS) target images. However, owing to differing sensor characteristics and acquisition times, HR panchromatic (PAN) and low-resolution MS image pairs tend to exhibit large pixel misalignments, especially for moving objects. Conventional CNNs trained with only the spectral loss on such satellite image data sets often produce PS images of low visual quality, including double-edge artifacts along strong edges and ghosting artifacts on moving objects. In this letter, we propose a novel loss function, called the spectral-spatial structure (S3) loss, based on the correlation maps between MS targets and PAN inputs. The proposed S3 loss can be used very effectively for pan-sharpening with various types of CNN structures, yielding significant visual improvements in PS images with suppressed artifacts.
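The abstract only outlines the idea of weighting the loss by PAN-MS correlation; the exact formulation is given in the letter itself. As a rough illustration only, the following PyTorch sketch shows one way a correlation-weighted spectral-spatial loss could be assembled: the local correlation between the PAN input and the MS-target intensity down-weights misaligned or moving regions in the spectral term and gates a PAN-gradient matching term. All names, the window size, the use of the MS intensity, and the gradient term are assumptions for illustration, not the authors' method.

```python
import torch
import torch.nn.functional as F

def local_correlation(x, y, win=9, eps=1e-6):
    """Per-pixel correlation between single-channel maps x and y over a win x win window."""
    pad = win // 2
    kernel = torch.ones(1, 1, win, win, device=x.device) / (win * win)
    mu_x = F.conv2d(x, kernel, padding=pad)
    mu_y = F.conv2d(y, kernel, padding=pad)
    var_x = F.conv2d(x * x, kernel, padding=pad) - mu_x ** 2
    var_y = F.conv2d(y * y, kernel, padding=pad) - mu_y ** 2
    cov_xy = F.conv2d(x * y, kernel, padding=pad) - mu_x * mu_y
    return cov_xy / torch.sqrt(var_x.clamp(min=eps) * var_y.clamp(min=eps))

def gradients(img):
    """Simple finite-difference gradients (horizontal, vertical)."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def s3_style_loss(ps, ms_target, pan, win=9, lam=1.0):
    """Hypothetical correlation-weighted spectral-spatial loss.

    ps:        network output, shape (B, C, H, W)
    ms_target: HR MS reference, shape (B, C, H, W)
    pan:       HR PAN input,    shape (B, 1, H, W)
    """
    # Correlation between PAN and the MS intensity is high where the two
    # modalities agree; misaligned or moving regions receive low weight.
    intensity = ms_target.mean(dim=1, keepdim=True)
    corr = local_correlation(pan, intensity, win=win).clamp(0.0, 1.0).detach()

    # Spectral term: correlation-weighted L1 distance to the MS target.
    spectral = (corr * (ps - ms_target).abs()).mean()

    # Spatial term: match PAN gradients where correlation is high, so strong
    # edges follow the PAN structure rather than a misaligned MS target.
    ps_dx, ps_dy = gradients(ps.mean(dim=1, keepdim=True))
    pan_dx, pan_dy = gradients(pan)
    spatial = (corr[..., :, 1:] * (ps_dx - pan_dx).abs()).mean() + \
              (corr[..., 1:, :] * (ps_dy - pan_dy).abs()).mean()

    return spectral + lam * spatial
```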
