Home > Foreign Journals > IEEE Transactions on Image Processing > CGNet: A Light-Weight Context Guided Network for Semantic Segmentation

CGNet: A Light-Weight Context Guided Network for Semantic Segmentation

Abstract

The demand for applying semantic segmentation models on mobile devices has been increasing rapidly. Current state-of-the-art networks have an enormous number of parameters and are hence unsuitable for mobile devices, while other small-memory-footprint models follow the spirit of classification networks and ignore the inherent characteristics of semantic segmentation. To tackle this problem, we propose a novel Context Guided Network (CGNet), a light-weight and efficient network for semantic segmentation. We first propose the Context Guided (CG) block, which learns the joint feature of both the local feature and the surrounding context effectively and efficiently, and further improves the joint feature with the global context. Based on the CG block, we develop CGNet, which captures contextual information in all stages of the network. CGNet is specially tailored to exploit the inherent properties of semantic segmentation and increase segmentation accuracy. Moreover, CGNet is elaborately designed to reduce the number of parameters and save memory footprint. Under an equivalent number of parameters, the proposed CGNet significantly outperforms existing light-weight segmentation networks. Extensive experiments on the Cityscapes and CamVid datasets verify the effectiveness of the proposed approach. Specifically, without any post-processing or multi-scale testing, the proposed CGNet achieves 64.8% mean IoU on Cityscapes with fewer than 0.5 M parameters.
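The abstract describes the CG block as combining a local feature, a surrounding-context feature (a wider receptive field), and a global-context gate. A minimal NumPy sketch of that structure is below; it is an illustration only, not the paper's implementation — fixed averaging kernels stand in for learned convolution weights, and the dilation value and sigmoid gate are assumptions made for the sketch.

```python
import numpy as np

def conv3x3(x, dilation=1):
    # Stand-in for a learned 3x3 convolution: averages each pixel's
    # 3x3 (possibly dilated) neighborhood, with "same" zero padding.
    c, h, w = x.shape
    pad = dilation
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += xp[:, pad + dy * dilation: pad + dy * dilation + h,
                         pad + dx * dilation: pad + dx * dilation + w]
    return out / 9.0

def cg_block(x, dilation=2):
    """Sketch of a Context Guided block:
    f_loc  -- local feature (plain 3x3 conv)
    f_sur  -- surrounding context (dilated 3x3 conv, wider view)
    f_joi  -- joint feature (channel-wise concatenation)
    gate   -- global context: per-channel scale from global avg pooling
    """
    f_loc = conv3x3(x, dilation=1)                   # local feature
    f_sur = conv3x3(x, dilation=dilation)            # surrounding context
    f_joi = np.concatenate([f_loc, f_sur], axis=0)   # joint feature
    # Global context as a channel-wise gate (sigmoid of global average).
    gate = 1.0 / (1.0 + np.exp(-f_joi.mean(axis=(1, 2), keepdims=True)))
    return f_joi * gate

x = np.random.rand(8, 16, 16)   # (channels, height, width)
y = cg_block(x)
print(y.shape)                  # (16, 16, 16): channels doubled by concat
```

In the paper's design this block appears at all stages of the network, which is how CGNet keeps contextual information flowing while staying under 0.5 M parameters; the sketch only shows the feature-fusion pattern of a single block.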
