Journal of Applied Remote Sensing

Semantic segmentation of multisensor remote sensing imagery with deep ConvNets and higher-order conditional random fields


Abstract

Aerial images acquired by multiple sensors provide comprehensive and diverse information about the materials and objects within a surveyed area. The current use of pretrained deep convolutional neural networks (DCNNs) is usually constrained to three-band images (i.e., RGB) obtained from a single optical sensor. Additional spectral bands from a multisensor setup introduce challenges for the use of DCNNs. We fuse the RGB feature information obtained from a deep learning framework with light detection and ranging (LiDAR) features to obtain semantic labeling. Specifically, we propose a decision-level multisensor fusion technique for semantic labeling of very-high-resolution optical imagery and LiDAR data. Our approach first obtains initial probabilistic predictions from two different sources: one from a pretrained neural network fine-tuned on a three-band optical image, and another from a probabilistic classifier trained on LiDAR data. These two predictions are then combined as the unary potential in a higher-order conditional random field (CRF) framework, which resolves fusion ambiguities by exploiting spatial-contextual information. We utilize graph cut to efficiently infer the final semantic labeling under our proposed higher-order CRF framework. Experiments performed on three benchmark multisensor datasets demonstrate the performance advantages of our proposed method. (C) The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License.
机译:由多个传感器获取的空中图像提供受测量区域内的材料和物体的全面和多样化的信息。目前使用预训预卷积的深度神经网络(DCNN)通常被限制为从单个光学传感器获得的三带图像(即RGB)。来自多个传感器设置的附加光谱带引起了使用DCNN的挑战。我们融合了从深度学习框架获得的RGB特征信息,以光检测和测距(LIDAR)功能来获得语义标记。具体地,我们提出了一种决策级多传感器融合技术,用于非常高分辨率光学图像和LIDAR数据的语义标记。我们的方法首先从两个不同的来源获得初始概率预测:一个来自在三带光学图像上微调的预先调谐的预磨牙网络,另一个来自在LIDAR数据上训练的概率分类器。然后将这两种预测作为使用高阶条件随机字段(CRF)框架组合为一元潜力,这通过利用空间上下文信息来解析融合歧义。我们利用图表切割以有效地推断出我们提出的高阶CRF框架的最终语义标签。在三个基准测试多传感器数据集上执行的实验证明了我们所提出的方法的性能优势。 (c)作者。由SPIE出版,根据创意公约归因于3.0未受到的许可证。
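The decision-level fusion step described above can be sketched in a few lines: per-pixel class probabilities from the RGB network and the LiDAR classifier are mixed, and the negative log of the mixture serves as the CRF unary potential. This is a minimal illustration, not the authors' implementation; the mixing weight `w` and the unary-only argmin decoding are assumptions (the paper's full model adds pairwise and higher-order terms solved by graph cut).

```python
import numpy as np

def fused_unary(p_rgb, p_lidar, w=0.5, eps=1e-12):
    """Decision-level fusion of two per-pixel probability maps.

    p_rgb, p_lidar: arrays of shape (num_pixels, num_classes) whose rows
    sum to 1. Returns the unary potential -log of the weighted mixture,
    ready to plug into a CRF energy. `w` is a hypothetical fusion weight.
    """
    p = w * p_rgb + (1.0 - w) * p_lidar
    return -np.log(p + eps)

# Toy example: 2 pixels, 3 classes.
p_rgb = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.4, 0.3]])
p_lidar = np.array([[0.6, 0.3, 0.1],
                    [0.1, 0.8, 0.1]])

U = fused_unary(p_rgb, p_lidar)
# Unary-only MAP labeling (the full CRF would also minimize
# pairwise/higher-order terms via graph cut).
labels = U.argmin(axis=1)
```

Here the second pixel is ambiguous under RGB alone (0.4 vs. 0.3) but the LiDAR classifier is confident, so the fused unary resolves it; the CRF's spatial terms would further smooth such decisions across neighboring pixels.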
