...
IEEE Transactions on Circuits and Systems for Video Technology

Ensemble Learning-Based Rate-Distortion Optimization for End-to-End Image Compression



Abstract

End-to-end image compression using trained deep networks as encoding/decoding models has developed substantially in recent years. Previous work is limited to a single encoding/decoding model, whereas we explore the use of multiple encoding/decoding models as an ensemble. We propose several methods to obtain multiple models. First, we adopt a boosting strategy to train multiple diverse networks as an ensemble. Second, we train an ensemble of multiple probability distribution models to reduce the distribution gap for efficient entropy coding. Third, we present a geometric transform-based self-ensemble method. The multiple models can be regarded as multiple coding modes, similar to those in non-deep video coding schemes. We further adopt block-level model/mode selection at the encoder side to pursue rate-distortion optimization, using hierarchical block partitioning to improve adaptability. Compared with single-model end-to-end compression, our proposed method improves compression efficiency significantly, achieving a 21% BD-rate reduction on the Kodak dataset without increasing decoding complexity. Alternatively, when keeping the same compression efficiency, our method can use much simpler decoding models, reducing floating-point operations by 70%.
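
To make the block-level rate-distortion decision concrete, here is a minimal, illustrative sketch (not the paper's implementation): each candidate encoding/decoding model is treated as a coding mode, and for every block the encoder keeps the mode that minimizes the Lagrangian cost J = D + λR. The `rd_select` function, the codec interface, and the toy quantizer "models" are hypothetical names introduced only for illustration.

```python
import numpy as np

def rd_select(block, codecs, lam):
    """Return (mode_index, reconstruction, bits) for the codec that minimizes
    the Lagrangian rate-distortion cost J = D + lam * R on this block."""
    best_cost, best = float("inf"), None
    for idx, codec in enumerate(codecs):
        recon, bits = codec(block)                   # reconstruction and estimated rate
        dist = float(np.mean((block - recon) ** 2))  # MSE distortion
        cost = dist + lam * bits
        if cost < best_cost:
            best_cost, best = cost, (idx, recon, bits)
    return best

def make_quantizer(step):
    """Toy stand-in for one trained encoding/decoding model: a uniform
    quantizer whose step size sets the rate-distortion trade-off."""
    def codec(x):
        symbols = np.round(x / step)
        recon = symbols * step
        # Crude rate proxy: empirical entropy of the quantized symbols, in bits.
        _, counts = np.unique(symbols, return_counts=True)
        p = counts / counts.sum()
        bits = float(-(p * np.log2(p)).sum() * symbols.size)
        return recon, bits
    return codec

block = np.random.default_rng(0).random((16, 16))
mode, recon, bits = rd_select(block, [make_quantizer(s) for s in (0.05, 0.1, 0.2)], lam=0.01)
print(f"chosen mode {mode}, {bits:.1f} bits")
```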
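The geometric transform-based self-ensemble can be sketched in the same spirit: the encoder tries the eight orientations of the dihedral group (flips and quarter-turn rotations), compresses each, inverts the transform on the reconstruction, and signals only the 3-bit index of the best orientation. This is again an illustrative sketch under assumed names; a real learned codec is not orientation-invariant, which is what makes this ensemble useful (the toy codec below is, so all modes tie).

```python
import numpy as np

def apply_t(x, k, flip):
    """One of the 8 dihedral transforms: optional horizontal flip, then k quarter-turns."""
    return np.rot90(np.fliplr(x) if flip else x, k)

def invert_t(x, k, flip):
    """Exact inverse of apply_t, so the decoder can undo the signaled transform."""
    y = np.rot90(x, -k)
    return np.fliplr(y) if flip else y

def toy_codec(x, step=0.1):
    """Stand-in for the learned codec: uniform quantization plus a crude rate proxy."""
    q = np.round(x / step)
    return q * step, float(q.size)  # (reconstruction, fake bit count)

def self_ensemble(block, lam=0.01):
    """Try all 8 geometric variants, keep the one with the lowest J = D + lam * R."""
    best = None
    for flip in (False, True):
        for k in range(4):
            recon, bits = toy_codec(apply_t(block, k, flip))
            recon = invert_t(recon, k, flip)
            cost = float(np.mean((block - recon) ** 2)) + lam * bits
            if best is None or cost < best[0]:
                best = (cost, (k, flip), recon)
    return best[1], best[2]

idx, recon = self_ensemble(np.random.default_rng(1).random((16, 16)))
print("chosen transform (rotations, flip):", idx)
```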
