
Context quantization for adaptive entropy coding in image compression.



Abstract

Context-based adaptive entropy coders are used in newer compression standards to achieve rates that are asymptotically close to the source entropy: separate arithmetic coders are maintained for a large number of possible conditioning classes. This greatly reduces the amount of sample data available for learning each conditional distribution. To combat this problem, referred to in the literature as the context dilution problem, one needs to balance the benefit of high-order context modeling against the learning cost associated with context dilution.

In the first part of this dissertation, we propose a context quantization method to attack the context dilution problem for non-binary sources. It begins with a large number of conditioning classes and then uses a clustering procedure to reduce the number of contexts to a desired size. The main operational difficulty in practice is describing the complex partition of the context space. To deal with this problem, we present two novel methods, coarse context quantization (CCQ) and entropy-coded state sequence (ECSS), for efficiently describing the context book, which completely specifies the context quantizer mapping.

The second part of this dissertation considers binarization of non-binary sources. As with non-binary sources, the cost of sending the complex context description as side information is very high. Up to now, all context quantizers have been designed off-line and optimized with respect to the statistics of a training set; the problem of handling the mismatch between the training set and an input image has remained largely untreated. We propose three novel schemes, based on minimum description length, image-dependent design, and minimum adaptive code length, to deal with this problem. Experimental results show that our approach outperforms the JBIG and JBIG2 standards, with peak compression improvements of 24% and 11%, respectively, on the chosen set of halftone images.

In the third part of this dissertation, we extend our study to the joint design of quantizers and entropy coders. We propose a context-based classification and adaptive quantization scheme, which essentially produces a finite-state quantizer and an entropy coder through the same procedure.

Keywords: context, entropy coding, context quantization, image compression.
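The clustering step described in the first part, which merges a large number of raw conditioning classes into a small set of quantized contexts, can be illustrated with a short sketch. The code below is only an assumption-laden illustration, not the dissertation's algorithm: it groups raw contexts whose empirical conditional symbol distributions are similar, using a k-means-style loop under the Kullback-Leibler divergence; the function and variable names (context_quantize, cond_counts, num_classes) are hypothetical.

    import numpy as np

    def _kl_to_centroids(dists, centroids):
        # KL(p_context || centroid) for every (context, class) pair
        return (dists[:, None, :]
                * (np.log(dists[:, None, :]) - np.log(centroids[None, :, :]))).sum(-1)

    def context_quantize(cond_counts, num_classes, iters=20, seed=0):
        # cond_counts: (num_contexts, alphabet_size) symbol counts observed
        # under each raw context in a training set.  Returns an array that
        # maps each raw context index to one of num_classes quantized classes.
        rng = np.random.default_rng(seed)
        counts = cond_counts.astype(float) + 1e-3          # smooth empty cells
        dists = counts / counts.sum(axis=1, keepdims=True)
        # initialise class centroids from randomly chosen raw contexts
        centroids = dists[rng.choice(len(dists), num_classes, replace=False)].copy()
        for _ in range(iters):
            labels = _kl_to_centroids(dists, centroids).argmin(axis=1)
            # recompute each centroid as the merged distribution of its members
            for c in range(num_classes):
                if (labels == c).any():
                    merged = counts[labels == c].sum(axis=0)
                    centroids[c] = merged / merged.sum()
        return _kl_to_centroids(dists, centroids).argmin(axis=1)

The resulting label array is the kind of context-book mapping whose compact description (for example via CCQ or ECSS, as named in the abstract) the dissertation is concerned with.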
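The minimum adaptive code length criterion mentioned in the second part can be sketched in the same spirit: for a given input image, evaluate the ideal adaptive code length produced by each candidate context quantizer designed off-line, and keep the quantizer with the smallest length. The fragment below is an interpretation under stated assumptions (a binary source coded with a Krichevsky-Trofimov sequential probability estimate); the names adaptive_code_length, pick_quantizer, symbols, contexts and candidates are hypothetical.

    import numpy as np

    def adaptive_code_length(symbols, contexts, quantizer, num_classes):
        # Ideal adaptive code length, in bits, of a stream of 0/1 symbols when
        # each symbol is coded with a probability estimate that adapts within
        # its quantized conditioning class (Krichevsky-Trofimov estimator).
        ones = np.zeros(num_classes)
        totals = np.zeros(num_classes)
        bits = 0.0
        for s, ctx in zip(symbols, contexts):
            c = quantizer[ctx]
            p_one = (ones[c] + 0.5) / (totals[c] + 1.0)  # KT estimate of P(symbol = 1)
            bits -= np.log2(p_one if s == 1 else 1.0 - p_one)
            ones[c] += s
            totals[c] += 1
        return bits

    def pick_quantizer(symbols, contexts, candidates, num_classes):
        # Choose, among candidate quantizers designed off-line on a training
        # set, the one whose adaptive code length on this image is smallest.
        lengths = [adaptive_code_length(symbols, contexts, q, num_classes)
                   for q in candidates]
        return int(np.argmin(lengths)), lengths

Selecting the quantizer by the code length it actually achieves on the input image is one simple way to reduce the training-set mismatch that the abstract identifies; the dissertation's own schemes may differ in detail.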

Bibliographic details

  • Author: Jin, Tong.
  • Affiliation: Simon Fraser University (Canada).
  • Degree grantor: Simon Fraser University (Canada).
  • Subject: Engineering, Electronics and Electrical.
  • Degree: Ph.D.
  • Year: 2006
  • Pages: 109 p.
  • Total pages: 109
  • Format: PDF
  • Language: English
